346 files changed, 14523 insertions, 8786 deletions
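The readme patched below instructs the user to append `import context/scite-context-user` to `SciteGlobal.properties`. As a hedged illustration (not part of the patch itself), the following shell sketch performs that step idempotently; the temporary directory is a stand-in for your real SciTE property path (e.g. `/usr/share/scite` on Linux or the `wscite` folder on Windows), and the helper name is made up for this example:

```shell
# Hypothetical helper: append the ConTeXt user import to SciteGlobal.properties,
# but only if it is not already there, so re-running after an upgrade is safe.
scite_dir=$(mktemp -d)                      # stand-in for the real property path
props="$scite_dir/SciteGlobal.properties"
printf 'position.tile=1\n' > "$props"       # pretend pre-existing settings

add_context_import() {
    # The readme says the import line must go at the end of the file.
    grep -qx 'import context/scite-context-user' "$1" ||
        printf 'import context/scite-context-user\n' >> "$1"
}

add_context_import "$props"
add_context_import "$props"                 # second run must not duplicate the line
```

Keeping the append guarded by the `grep` check means the script can be run after every update without the import line piling up.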
diff --git a/context/data/scite/scite-context-readme.pdf b/context/data/scite/context/documents/scite-context-readme.pdf
index 99f05a2a5..2bd7d4216 100644
Binary files differ
--- a/context/data/scite/scite-context-readme.pdf
+++ b/context/data/scite/context/documents/scite-context-readme.pdf
diff --git a/context/data/scite/scite-context-readme.tex b/context/data/scite/context/documents/scite-context-readme.tex
index 42f5e0a98..cbfc00a33 100644
--- a/context/data/scite/scite-context-readme.tex
+++ b/context/data/scite/context/documents/scite-context-readme.tex
@@ -191,60 +191,115 @@ You need to add this path to your local path definition. Installing \SCITE\ to
 some known place has the advantage that you can move it around. There are no
 special dependencies on the operating system.
 
+On \MSWINDOWS\ you can for instance install \SCITE\ in:
+
+\starttyping
+c:\data\system\scite
+\stoptyping
+
+and then end up with:
+
+\starttyping
+c:\data\system\scite\wscite
+\stoptyping
+
+and that is the path you need to add to your environment \type {PATH} variable.
+
+On \LINUX\ the files end up in:
+
+\starttyping
+/usr/bin
+/usr/share/scite
+\stoptyping
+
+where the second path is the one in which we will put more files.
+
+\subject{Installing \type {scintillua}}
+
 Next you need to install the lpeg lexers. \footnote {Versions later than 2.11
 will not run on \MSWINDOWS\ 2K. In that case you need to comment the external
-lexer import.} These can be fetched from:
+lexer import.} The library is part of the \type {textadept} editor by Mitchell
+(\hyphenatedurl {mitchell.att.foicica.com}), which is also based on scintilla.
+The archive can be fetched from:
 
 \starttyping
 http://foicica.com/scintillua/
 \stoptyping
 
-On \MSWINDOWS\ you need to copy the \type {lexers} subfolder to the \type
-{wscite} folder. For \LINUX\ the place depends on the distribution and I just
-copy them in the same path as where the regular properties files live. \footnote
-{If you update, don't do so without testing first.
Sometimes there are changes in
-\SCITE\ that influence the lexers in which case you have to wait till we have
-update them to suit those changes.}
+On \MSWINDOWS\ you need to copy the files to the \type {wscite} folder (so we end
+up with a \type {lexers} subfolder there). For \LINUX\ the place depends on the
+distribution, for instance \type {/usr/share/scite}; this is the place where the
+regular properties files live. \footnote {If you update, don't do so without
+testing first. Sometimes there are changes in \SCITE\ that influence the lexers
+in which case you have to wait till we have updated them to suit those changes.}
 
-For \UNIX, one can take a precompiled version as well. Here we might need to split
-the set of files into:
+So, you end up, on \MSWINDOWS\ with:
 
 \starttyping
-/usr/bin
-/usr/share/scite
+c:\data\system\scite\wscite\lexers
 \stoptyping
 
-The second path is hard coded in the binary and moving all files there probably works
-okay. Beware: if you're on a 64 bit system, you need to rename the 64 bit \type {so}
-library.
+And on \LINUX:
+
+\starttyping
+/usr/share/scite/lexers
+\stoptyping
 
-If you want to use \CONTEXT, you need to copy the relevant files from
+Beware: if you're on a 64 bit system, you need to rename the 64 bit \type {so}
+library into one without a number. Unfortunately the 64 bit library is not always
+available, which can give surprises when the operating system gets updates. In such
+a case you should downgrade or use \type {wine} with the \MSWINDOWS\ binaries
+instead. After installation you need to restart \SCITE\ in order to see if things
+work out as expected.
+
+\subject{Installing the \CONTEXT\ lexers}
+
+When we started using this nice extension, we ran into issues and as a
+consequence shipped a patched version of the \LUA\ code. We also needed more
+control, as we wanted to provide more features and complex nested lexers.
Because the library
+\API\ changed a couple of times, we now have our own variant which will be
+cleaned up over time to be more consistent with our other \LUA\ code (so that we
+can also use it in \CONTEXT\ as a variant verbatim lexer). We hope to be able to
+use the \type {scintillua} library as it does the job.
+
+Anyway, if you want to use \CONTEXT, you need to copy the relevant files from
 
 \starttyping
 <texroot>/tex/texmf-context/context/data/scite
 \stoptyping
 
-to the path were \SCITE\ keeps its property files (\type {*.properties}). There
-is a file called \type {SciteGlobal.properties}. At the end of that file (on
-\MSWINDOWS\ it is in the path where the Scite binary) you then add a line to the
-end:
+to the path where \SCITE\ keeps its property files (\type {*.properties}). This is
+the path we already mentioned. There should be a file there called \type
+{SciteGlobal.properties}.
+
+So, in the end you get on \MSWINDOWS\ new files in:
 
 \starttyping
-import scite-context-user
+c:\data\system\scite\wscite
+c:\data\system\scite\wscite\context
+c:\data\system\scite\wscite\context\lexer
+c:\data\system\scite\wscite\context\lexer\themes
+c:\data\system\scite\wscite\context\lexer\data
+c:\data\system\scite\wscite\context\documents
 \stoptyping
 
-You need to restart \SCITE\ in order to see if things work out as expected.
-
-Disabling the external lexer in a recent \SCITE\ is somewhat tricky. In that case
-the end of that file looks like:
+while on \LINUX\ you get:
 
 \starttyping
-imports.exclude=scite-context-external
-import *
-import scite-context-user
+/usr/share/scite/
+/usr/share/scite/context
+/usr/share/scite/context/lexer
+/usr/share/scite/context/lexer/themes
+/usr/share/scite/context/lexer/data
+/usr/share/scite/context/documents
 \stoptyping
 
-In any case you need to make sure that the user file is loaded last.
+At the end of the \type {SciteGlobal.properties} you need to add the following +line: + +\starttyping +import context/scite-context-user +\stoptyping After this, things should run as expected (given that \TEX\ runs at the console as well). @@ -266,102 +321,15 @@ The configuration file defaults to the Dejavu fonts. These free fonts are part o the \CONTEXT\ suite (also known as the standalone distribution). Of course you can fetch them from \type {http://dejavu-fonts.org} as well. You have to copy them to where your operating system expects them. In the suite they are available -in +in: \starttyping <contextroot>/tex/texmf/fonts/truetype/public/dejavu \stoptyping -\subject{An alternative approach} - -If for some reason you prefer not to mess with property files in the main \SCITE\ -path, you can follow a different route and selectively copy files to places. - -The following files are needed for the lpeg based lexer: - -\starttyping -lexers/scite-context-lexer.lua -lexers/scite-context-lexer-tex.lua -lexers/scite-context-lexer-mps.lua -lexers/scite-context-lexer-lua.lua -lexers/scite-context-lexer-cld.lua -lexers/scite-context-lexer-txt.lua -lexers/scite-context-lexer-xml*.lua -lexers/scite-context-lexer-pdf*.lua - -lexers/context/data/scite-context-data-tex.lua -lexers/context/data/scite-context-data-context.lua -lexers/context/data/scite-context-data-interfaces.lua -lexers/context/data/scite-context-data-metapost.lua -lexers/context/data/scite-context-data-metafun.lua - -lexers/themes/scite-context-theme.lua -\stoptyping - -The data files are needed because we cannot access property files from within the -lexer. If we could open a file we could use the property files instead. - -These files go to the \type {lexers} subpath in your \SCITE\ installation. -Normally this sits in the binary path. The following files provide some -extensions. On \MSWINDOWS\ you can copy these files to the path where the \SCITE\ -binary lives. 
-
-\starttyping
-scite-ctx.lua
-\stoptyping
-
-Because property files can only be loaded from the same path where the (user)
-file loads them you need to copy the following files to the same path where the
-loading is defined:
-
-\starttyping
-scite-context.properties
-scite-context-internal.properties
-scite-context-external.properties
-
-scite-pragma.properties
-
-scite-tex.properties
-scite-metapost.properties
-
-scite-context-data-tex.properties
-scite-context-data-context.properties
-scite-context-data-interfaces.properties
-scite-context-data-metapost.properties
-scite-context-data-metafun.properties
-
-scite-ctx.properties
-scite-ctx-context.properties
-scite-ctx-example.properties
-\stoptyping
-
-On \MSWINDOWS\ these go to:
-
-\starttyping
-c:/Users/YourName
-\stoptyping
-
-Next you need to add this to:
-
-\starttyping
-import scite-context
-import scite-context-internal
-import scite-context-external
-import scite-pragma
-\stoptyping
-
-to the file:
-
-\starttyping
-SciTEUser.properties
-\stoptyping
-
-Of course the pragma import is optional. You can comment either the internal or
-external variant but there is no reason not to keep them both.
-
 \subject{Extensions}
 
-Just a quick not to some extensions. If you select a part of the text (normally
+Just a quick note about some extensions. If you select a part of the text (normally
 you do this with the shift key pressed) and you hit \type {Shift-F11}, you get a
 menu with some options. More (robust) ones will be provided at some point.
 
@@ -388,6 +356,27 @@ disable it). Wrong words are colored red, and words that might have a case
 problem are colored orange. Recognized words are greyed and words with less than
 three characters are ignored.
 
+A spell checking file has to be put in the \type {lexers/data} directory and
+looks as follows (e.g.
\type {spell-uk.lua}):
+
+\starttyping
+return {
+    ["max"]=40,
+    ["min"]=3,
+    ["n"]=151493,
+    ["words"]={
+        ["aardvark"]="aardvark",
+        ["aardvarks"]="aardvarks",
+        ["aardwolf"]="aardwolf",
+        ["aardwolves"]="aardwolves",
+        ...
+    }
+}
+\stoptyping
+
+The keys are the words as they are checked, the values are the correct forms
+(which can have uppercase characters). The word files are not distributed (but
+they might be at some point).
+
 In the case of internal lexers, the following file is needed:
 
 \starttyping
@@ -451,8 +440,8 @@ releases.
 
 \subject{The external lexers}
 
-These are the more advanced. They provide more detail and the \CONTEXT\ lexer
-also supports nested \METAPOST\ and \LUA. Currently there is no detailed
+These are the more advanced lexers. They provide more detail and the \CONTEXT\
+lexer also supports nested \METAPOST\ and \LUA. Currently there is no detailed
 configuration but this might change once they are stable.
 
 The external lexers operate on documents while the internal ones operate on
@@ -463,13 +452,6 @@ garbage collecting many small tables comes at a price. Of course in practice thi
 probably gets unnoticed. \footnote {I wrote the code in 2011 on a more than 5
 years old Dell M90 laptop, so I suppose that speed is less of an issue now.}
 
-In principle the external lexers can be used with \type {textadept} which also
-uses \type {scintilla}. Actually, support for lpeg lexing originates in \type
-{textadept}. Currently \type {textadept} lacks a couple of features I like about
-\SCITE\ (for instance it has no realtime logpane) and it's also still changing.
-At some point the \CONTEXT\ distribution might ship with files for \type
-{textadept} as well.
-
 The external lpeg lexers work okay with the \MSWINDOWS\ and \LINUX\ versions of
 \SCITE, but unfortunately at the time of writing this, the \LUA\ library that is
 needed is not available for the \MACOSX\ version of \SCITE.
Also, due to the fact
@@ -480,7 +462,7 @@ In addition to \CONTEXT\ and \METAFUN\ lexing a \LUA\ lexer is also provided so
 that we can handle \CONTEXT\ \LUA\ Document (\CLD) files too. There is also an
 \XML\ lexer. This one also provides spell checking. The \PDF\ lexer tries to do a
 good job on \PDF\ files, but it has some limitations. There is also a simple text
-file lexer that does spell checking.
+file lexer that does spell checking. Finally there is a lexer for \CWEB\ files.
 
 Don't worry if you see an orange rectangle in your \TEX\ or \XML\ document. This
 indicates that there is a special space character there, for instance \type
diff --git a/context/data/scite/scite-context-visual.pdf b/context/data/scite/context/documents/scite-context-visual.pdf
index 69d82eda6..69d82eda6 100644
--- a/context/data/scite/scite-context-visual.pdf
+++ b/context/data/scite/context/documents/scite-context-visual.pdf
diff --git a/context/data/scite/scite-context-visual.png b/context/data/scite/context/documents/scite-context-visual.png
index 7368a68f1..7368a68f1 100644
Binary files differ
--- a/context/data/scite/scite-context-visual.png
+++ b/context/data/scite/context/documents/scite-context-visual.png
diff --git a/context/data/scite/context/lexers/data/scite-context-data-context.lua b/context/data/scite/context/lexers/data/scite-context-data-context.lua
new file mode 100644
index 000000000..6c0293fbd
--- /dev/null
+++ b/context/data/scite/context/lexers/data/scite-context-data-context.lua
@@ -0,0 +1,4 @@
+return {
+ ["constants"]={ "zerocount", "minusone", "minustwo", "plusone", "plustwo", "plusthree", "plusfour", "plusfive", "plussix", "plusseven", "pluseight", "plusnine", "plusten", "plussixteen", "plushundred", "plusthousand", "plustenthousand", "plustwentythousand", "medcard", "maxcard", "zeropoint", "onepoint", "halfapoint", "onebasepoint", "maxdimen", "scaledpoint", "thousandpoint", "points", "halfpoint", "zeroskip", "zeromuskip", "onemuskip", "pluscxxvii", "pluscxxviii",
"pluscclv", "pluscclvi", "normalpagebox", "endoflinetoken", "outputnewlinechar", "emptytoks", "empty", "undefined", "voidbox", "emptybox", "emptyvbox", "emptyhbox", "bigskipamount", "medskipamount", "smallskipamount", "fmtname", "fmtversion", "texengine", "texenginename", "texengineversion", "luatexengine", "pdftexengine", "xetexengine", "unknownengine", "etexversion", "pdftexversion", "xetexversion", "xetexrevision", "activecatcode", "bgroup", "egroup", "endline", "conditionaltrue", "conditionalfalse", "attributeunsetvalue", "uprotationangle", "rightrotationangle", "downrotationangle", "leftrotationangle", "inicatcodes", "ctxcatcodes", "texcatcodes", "notcatcodes", "txtcatcodes", "vrbcatcodes", "prtcatcodes", "nilcatcodes", "luacatcodes", "tpacatcodes", "tpbcatcodes", "xmlcatcodes", "ctdcatcodes", "escapecatcode", "begingroupcatcode", "endgroupcatcode", "mathshiftcatcode", "alignmentcatcode", "endoflinecatcode", "parametercatcode", "superscriptcatcode", "subscriptcatcode", "ignorecatcode", "spacecatcode", "lettercatcode", "othercatcode", "activecatcode", "commentcatcode", "invalidcatcode", "tabasciicode", "newlineasciicode", "formfeedasciicode", "endoflineasciicode", "endoffileasciicode", "spaceasciicode", "hashasciicode", "dollarasciicode", "commentasciicode", "ampersandasciicode", "colonasciicode", "backslashasciicode", "circumflexasciicode", "underscoreasciicode", "leftbraceasciicode", "barasciicode", "rightbraceasciicode", "tildeasciicode", "delasciicode", "lessthanasciicode", "morethanasciicode", "doublecommentsignal", "atsignasciicode", "exclamationmarkasciicode", "questionmarkasciicode", "doublequoteasciicode", "singlequoteasciicode", "forwardslashasciicode", "primeasciicode", "activemathcharcode", "activetabtoken", "activeformfeedtoken", "activeendoflinetoken", "batchmodecode", "nonstopmodecode", "scrollmodecode", "errorstopmodecode", "bottomlevelgroupcode", "simplegroupcode", "hboxgroupcode", "adjustedhboxgroupcode", "vboxgroupcode", "vtopgroupcode", 
"aligngroupcode", "noaligngroupcode", "outputgroupcode", "mathgroupcode", "discretionarygroupcode", "insertgroupcode", "vcentergroupcode", "mathchoicegroupcode", "semisimplegroupcode", "mathshiftgroupcode", "mathleftgroupcode", "vadjustgroupcode", "charnodecode", "hlistnodecode", "vlistnodecode", "rulenodecode", "insertnodecode", "marknodecode", "adjustnodecode", "ligaturenodecode", "discretionarynodecode", "whatsitnodecode", "mathnodecode", "gluenodecode", "kernnodecode", "penaltynodecode", "unsetnodecode", "mathsnodecode", "charifcode", "catifcode", "numifcode", "dimifcode", "oddifcode", "vmodeifcode", "hmodeifcode", "mmodeifcode", "innerifcode", "voidifcode", "hboxifcode", "vboxifcode", "xifcode", "eofifcode", "trueifcode", "falseifcode", "caseifcode", "definedifcode", "csnameifcode", "fontcharifcode", "fontslantperpoint", "fontinterwordspace", "fontinterwordstretch", "fontinterwordshrink", "fontexheight", "fontemwidth", "fontextraspace", "slantperpoint", "interwordspace", "interwordstretch", "interwordshrink", "exheight", "emwidth", "extraspace", "mathsupdisplay", "mathsupnormal", "mathsupcramped", "mathsubnormal", "mathsubcombined", "mathaxisheight", "startmode", "stopmode", "startnotmode", "stopnotmode", "startmodeset", "stopmodeset", "doifmode", "doifmodeelse", "doifnotmode", "startmodeset", "stopmodeset", "startallmodes", "stopallmodes", "startnotallmodes", "stopnotallmodes", "doifallmodes", "doifallmodeselse", "doifnotallmodes", "startenvironment", "stopenvironment", "environment", "startcomponent", "stopcomponent", "component", "startproduct", "stopproduct", "product", "startproject", "stopproject", "project", "starttext", "stoptext", "startnotext", "stopnotext", "startdocument", "stopdocument", "documentvariable", "setupdocument", "startmodule", "stopmodule", "usemodule", "usetexmodule", "useluamodule", "setupmodule", "currentmoduleparameter", "moduleparameter", "startTEXpage", "stopTEXpage", "enablemode", "disablemode", "preventmode", 
"globalenablemode", "globaldisablemode", "globalpreventmode", "pushmode", "popmode", "typescriptone", "typescripttwo", "typescriptthree", "mathsizesuffix", "mathordcode", "mathopcode", "mathbincode", "mathrelcode", "mathopencode", "mathclosecode", "mathpunctcode", "mathalphacode", "mathinnercode", "mathnothingcode", "mathlimopcode", "mathnolopcode", "mathboxcode", "mathchoicecode", "mathaccentcode", "mathradicalcode", "constantnumber", "constantnumberargument", "constantdimen", "constantdimenargument", "constantemptyargument", "continueifinputfile", "luastringsep", "!!bs", "!!es", "lefttorightmark", "righttoleftmark", "breakablethinspace", "nobreakspace", "narrownobreakspace", "zerowidthnobreakspace", "ideographicspace", "ideographichalffillspace", "twoperemspace", "threeperemspace", "fourperemspace", "fiveperemspace", "sixperemspace", "figurespace", "punctuationspace", "hairspace", "zerowidthspace", "zerowidthnonjoiner", "zerowidthjoiner", "zwnj", "zwj" }, + ["helpers"]={ "startsetups", "stopsetups", "startxmlsetups", "stopxmlsetups", "startluasetups", "stopluasetups", "starttexsetups", "stoptexsetups", "startrawsetups", "stoprawsetups", "startlocalsetups", "stoplocalsetups", "starttexdefinition", "stoptexdefinition", "starttexcode", "stoptexcode", "startcontextcode", "stopcontextcode", "startcontextdefinitioncode", "stopcontextdefinitioncode", "doifsetupselse", "doifsetups", "doifnotsetups", "setup", "setups", "texsetup", "xmlsetup", "luasetup", "directsetup", "doifelsecommandhandler", "doifnotcommandhandler", "doifcommandhandler", "newmode", "setmode", "resetmode", "newsystemmode", "setsystemmode", "resetsystemmode", "pushsystemmode", "popsystemmode", "booleanmodevalue", "newcount", "newdimen", "newskip", "newmuskip", "newbox", "newtoks", "newread", "newwrite", "newmarks", "newinsert", "newattribute", "newif", "newlanguage", "newfamily", "newfam", "newhelp", "then", "begcsname", "strippedcsname", "firstargumentfalse", "firstargumenttrue", "secondargumentfalse", 
"secondargumenttrue", "thirdargumentfalse", "thirdargumenttrue", "fourthargumentfalse", "fourthargumenttrue", "fifthargumentfalse", "fifthsargumenttrue", "sixthargumentfalse", "sixtsargumenttrue", "doglobal", "dodoglobal", "redoglobal", "resetglobal", "donothing", "dontcomplain", "forgetall", "donetrue", "donefalse", "htdp", "unvoidbox", "hfilll", "vfilll", "mathbox", "mathlimop", "mathnolop", "mathnothing", "mathalpha", "currentcatcodetable", "defaultcatcodetable", "catcodetablename", "newcatcodetable", "startcatcodetable", "stopcatcodetable", "startextendcatcodetable", "stopextendcatcodetable", "pushcatcodetable", "popcatcodetable", "restorecatcodes", "setcatcodetable", "letcatcodecommand", "defcatcodecommand", "uedcatcodecommand", "hglue", "vglue", "hfillneg", "vfillneg", "hfilllneg", "vfilllneg", "ruledhss", "ruledhfil", "ruledhfill", "ruledhfilneg", "ruledhfillneg", "normalhfillneg", "ruledvss", "ruledvfil", "ruledvfill", "ruledvfilneg", "ruledvfillneg", "normalvfillneg", "ruledhbox", "ruledvbox", "ruledvtop", "ruledvcenter", "ruledmbox", "ruledhskip", "ruledvskip", "ruledkern", "ruledmskip", "ruledmkern", "ruledhglue", "ruledvglue", "normalhglue", "normalvglue", "ruledpenalty", "filledhboxb", "filledhboxr", "filledhboxg", "filledhboxc", "filledhboxm", "filledhboxy", "filledhboxk", "scratchcounter", "globalscratchcounter", "scratchdimen", "globalscratchdimen", "scratchskip", "globalscratchskip", "scratchmuskip", "globalscratchmuskip", "scratchtoks", "globalscratchtoks", "scratchbox", "globalscratchbox", "normalbaselineskip", "normallineskip", "normallineskiplimit", "availablehsize", "localhsize", "setlocalhsize", "nextbox", "dowithnextbox", "dowithnextboxcs", "dowithnextboxcontent", "dowithnextboxcontentcs", "scratchwidth", "scratchheight", "scratchdepth", "scratchoffset", "scratchdistance", "scratchhsize", "scratchvsize", "scratchxoffset", "scratchyoffset", "scratchhoffset", "scratchvoffset", "scratchxposition", "scratchyposition", "scratchtopoffset", 
"scratchbottomoffset", "scratchleftoffset", "scratchrightoffset", "scratchcounterone", "scratchcountertwo", "scratchcounterthree", "scratchdimenone", "scratchdimentwo", "scratchdimenthree", "scratchskipone", "scratchskiptwo", "scratchskipthree", "scratchmuskipone", "scratchmuskiptwo", "scratchmuskipthree", "scratchtoksone", "scratchtokstwo", "scratchtoksthree", "scratchboxone", "scratchboxtwo", "scratchboxthree", "scratchnx", "scratchny", "scratchmx", "scratchmy", "scratchunicode", "scratchleftskip", "scratchrightskip", "scratchtopskip", "scratchbottomskip", "doif", "doifnot", "doifelse", "doifinset", "doifnotinset", "doifinsetelse", "doifnextcharelse", "doifnextoptionalelse", "doifnextoptionalcselse", "doiffastoptionalcheckelse", "doifnextbgroupelse", "doifnextbgroupcselse", "doifnextparenthesiselse", "doifundefinedelse", "doifdefinedelse", "doifundefined", "doifdefined", "doifelsevalue", "doifvalue", "doifnotvalue", "doifnothing", "doifsomething", "doifelsenothing", "doifsomethingelse", "doifvaluenothing", "doifvaluesomething", "doifelsevaluenothing", "doifdimensionelse", "doifnumberelse", "doifnumber", "doifnotnumber", "doifcommonelse", "doifcommon", "doifnotcommon", "doifinstring", "doifnotinstring", "doifinstringelse", "doifassignmentelse", "docheckassignment", "tracingall", "tracingnone", "loggingall", "removetoks", "appendtoks", "prependtoks", "appendtotoks", "prependtotoks", "to", "endgraf", "endpar", "everyendpar", "reseteverypar", "finishpar", "empty", "null", "space", "quad", "enspace", "obeyspaces", "obeylines", "obeyedspace", "obeyedline", "normalspace", "executeifdefined", "singleexpandafter", "doubleexpandafter", "tripleexpandafter", "dontleavehmode", "removelastspace", "removeunwantedspaces", "keepunwantedspaces", "wait", "writestatus", "define", "defineexpandable", "redefine", "setmeasure", "setemeasure", "setgmeasure", "setxmeasure", "definemeasure", "freezemeasure", "measure", "measured", "installcorenamespace", "getvalue", "getuvalue", 
"setvalue", "setevalue", "setgvalue", "setxvalue", "letvalue", "letgvalue", "resetvalue", "undefinevalue", "ignorevalue", "setuvalue", "setuevalue", "setugvalue", "setuxvalue", "globallet", "glet", "udef", "ugdef", "uedef", "uxdef", "checked", "unique", "getparameters", "geteparameters", "getgparameters", "getxparameters", "forgetparameters", "copyparameters", "getdummyparameters", "dummyparameter", "directdummyparameter", "setdummyparameter", "letdummyparameter", "usedummystyleandcolor", "usedummystyleparameter", "usedummycolorparameter", "processcommalist", "processcommacommand", "quitcommalist", "quitprevcommalist", "processaction", "processallactions", "processfirstactioninset", "processallactionsinset", "unexpanded", "expanded", "startexpanded", "stopexpanded", "protected", "protect", "unprotect", "firstofoneargument", "firstoftwoarguments", "secondoftwoarguments", "firstofthreearguments", "secondofthreearguments", "thirdofthreearguments", "firstoffourarguments", "secondoffourarguments", "thirdoffourarguments", "fourthoffourarguments", "firstoffivearguments", "secondoffivearguments", "thirdoffivearguments", "fourthoffivearguments", "fifthoffivearguments", "firstofsixarguments", "secondofsixarguments", "thirdofsixarguments", "fourthofsixarguments", "fifthofsixarguments", "sixthofsixarguments", "firstofoneunexpanded", "gobbleoneargument", "gobbletwoarguments", "gobblethreearguments", "gobblefourarguments", "gobblefivearguments", "gobblesixarguments", "gobblesevenarguments", "gobbleeightarguments", "gobbleninearguments", "gobbletenarguments", "gobbleoneoptional", "gobbletwooptionals", "gobblethreeoptionals", "gobblefouroptionals", "gobblefiveoptionals", "dorecurse", "doloop", "exitloop", "dostepwiserecurse", "recurselevel", "recursedepth", "dofastloopcs", "dowith", "newconstant", "setnewconstant", "setconstant", "setconstantvalue", "newconditional", "settrue", "setfalse", "settruevalue", "setfalsevalue", "newmacro", "setnewmacro", "newfraction", "newsignal", 
"dosingleempty", "dodoubleempty", "dotripleempty", "doquadrupleempty", "doquintupleempty", "dosixtupleempty", "doseventupleempty", "dosingleargument", "dodoubleargument", "dotripleargument", "doquadrupleargument", "doquintupleargument", "dosixtupleargument", "doseventupleargument", "dosinglegroupempty", "dodoublegroupempty", "dotriplegroupempty", "doquadruplegroupempty", "doquintuplegroupempty", "permitspacesbetweengroups", "dontpermitspacesbetweengroups", "nopdfcompression", "maximumpdfcompression", "normalpdfcompression", "modulonumber", "dividenumber", "getfirstcharacter", "doiffirstcharelse", "startnointerference", "stopnointerference", "twodigits", "threedigits", "leftorright", "strut", "setstrut", "strutbox", "strutht", "strutdp", "strutwd", "struthtdp", "begstrut", "endstrut", "lineheight", "ordordspacing", "ordopspacing", "ordbinspacing", "ordrelspacing", "ordopenspacing", "ordclosespacing", "ordpunctspacing", "ordinnerspacing", "opordspacing", "opopspacing", "opbinspacing", "oprelspacing", "opopenspacing", "opclosespacing", "oppunctspacing", "opinnerspacing", "binordspacing", "binopspacing", "binbinspacing", "binrelspacing", "binopenspacing", "binclosespacing", "binpunctspacing", "bininnerspacing", "relordspacing", "relopspacing", "relbinspacing", "relrelspacing", "relopenspacing", "relclosespacing", "relpunctspacing", "relinnerspacing", "openordspacing", "openopspacing", "openbinspacing", "openrelspacing", "openopenspacing", "openclosespacing", "openpunctspacing", "openinnerspacing", "closeordspacing", "closeopspacing", "closebinspacing", "closerelspacing", "closeopenspacing", "closeclosespacing", "closepunctspacing", "closeinnerspacing", "punctordspacing", "punctopspacing", "punctbinspacing", "punctrelspacing", "punctopenspacing", "punctclosespacing", "punctpunctspacing", "punctinnerspacing", "innerordspacing", "inneropspacing", "innerbinspacing", "innerrelspacing", "inneropenspacing", "innerclosespacing", "innerpunctspacing", "innerinnerspacing", 
"normalreqno", "startimath", "stopimath", "normalstartimath", "normalstopimath", "startdmath", "stopdmath", "normalstartdmath", "normalstopdmath", "uncramped", "cramped", "triggermathstyle", "mathstylefont", "mathsmallstylefont", "mathstyleface", "mathsmallstyleface", "mathstylecommand", "mathpalette", "mathstylehbox", "mathstylevbox", "mathstylevcenter", "mathstylevcenteredhbox", "mathstylevcenteredvbox", "mathtext", "setmathsmalltextbox", "setmathtextbox", "triggerdisplaystyle", "triggertextstyle", "triggerscriptstyle", "triggerscriptscriptstyle", "triggeruncrampedstyle", "triggercrampedstyle", "triggersmallstyle", "triggeruncrampedsmallstyle", "triggercrampedsmallstyle", "triggerbigstyle", "triggeruncrampedbigstyle", "triggercrampedbigstyle", "luaexpr", "expdoifelse", "expdoif", "expdoifnot", "expdoifcommonelse", "expdoifinsetelse", "ctxdirectlua", "ctxlatelua", "ctxsprint", "ctxwrite", "ctxcommand", "ctxdirectcommand", "ctxlatecommand", "ctxreport", "ctxlua", "luacode", "lateluacode", "directluacode", "registerctxluafile", "ctxloadluafile", "luaversion", "luamajorversion", "luaminorversion", "ctxluacode", "luaconditional", "luaexpanded", "startluaparameterset", "stopluaparameterset", "luaparameterset", "definenamedlua", "obeylualines", "obeyluatokens", "startluacode", "stopluacode", "startlua", "stoplua", "startctxfunction", "stopctxfunction", "ctxfunction", "startctxfunctiondefinition", "stopctxfunctiondefinition", "carryoverpar", "assumelongusagecs", "Umathbotaccent", "righttolefthbox", "lefttorighthbox", "righttoleftvbox", "lefttorightvbox", "righttoleftvtop", "lefttorightvtop", "rtlhbox", "ltrhbox", "rtlvbox", "ltrvbox", "rtlvtop", "ltrvtop", "autodirhbox", "autodirvbox", "autodirvtop", "lefttoright", "righttoleft", "synchronizelayoutdirection", "synchronizedisplaydirection", "synchronizeinlinedirection", "lesshyphens", "morehyphens", "nohyphens", "dohyphens", "Ucheckedstartdisplaymath", "Ucheckedstopdisplaymath" }, +}
\ No newline at end of file diff --git a/context/data/scite/lexers/data/scite-context-data-interfaces.lua b/context/data/scite/context/lexers/data/scite-context-data-interfaces.lua index b2c09b62a..b2c09b62a 100644 --- a/context/data/scite/lexers/data/scite-context-data-interfaces.lua +++ b/context/data/scite/context/lexers/data/scite-context-data-interfaces.lua diff --git a/context/data/scite/lexers/data/scite-context-data-metafun.lua b/context/data/scite/context/lexers/data/scite-context-data-metafun.lua index 50b9ecec4..50b9ecec4 100644 --- a/context/data/scite/lexers/data/scite-context-data-metafun.lua +++ b/context/data/scite/context/lexers/data/scite-context-data-metafun.lua diff --git a/context/data/scite/lexers/data/scite-context-data-metapost.lua b/context/data/scite/context/lexers/data/scite-context-data-metapost.lua index 766ea90da..766ea90da 100644 --- a/context/data/scite/lexers/data/scite-context-data-metapost.lua +++ b/context/data/scite/context/lexers/data/scite-context-data-metapost.lua diff --git a/context/data/scite/lexers/data/scite-context-data-tex.lua b/context/data/scite/context/lexers/data/scite-context-data-tex.lua index 7d710740c..415b74128 100644 --- a/context/data/scite/lexers/data/scite-context-data-tex.lua +++ b/context/data/scite/context/lexers/data/scite-context-data-tex.lua @@ -1,9 +1,9 @@ return { ["aleph"]={ "AlephVersion", "Alephminorversion", "Alephrevision", "Alephversion", "Omegaminorversion", "Omegarevision", "Omegaversion", "boxdir", "pagebottomoffset", "pagerightoffset" }, ["etex"]={ "botmarks", "clubpenalties", "currentgrouplevel", "currentgrouptype", "currentifbranch", "currentiflevel", "currentiftype", "detokenize", "dimexpr", "displaywidowpenalties", "eTeXVersion", "eTeXminorversion", "eTeXrevision", "eTeXversion", "everyeof", "firstmarks", "fontchardp", "fontcharht", "fontcharic", "fontcharwd", "glueexpr", "glueshrink", "glueshrinkorder", "gluestretch", "gluestretchorder", "gluetomu", "ifcsname", "ifdefined", 
"iffontchar", "interactionmode", "interlinepenalties", "lastlinefit", "lastnodetype", "marks", "muexpr", "mutoglue", "numexpr", "pagediscards", "parshapedimen", "parshapeindent", "parshapelength", "predisplaydirection", "protected", "readline", "savinghyphcodes", "savingvdiscards", "scantokens", "showgroups", "showifs", "showtokens", "splitbotmarks", "splitdiscards", "splitfirstmarks", "topmarks", "tracingassigns", "tracinggroups", "tracingifs", "tracingnesting", "tracingscantokens", "unexpanded", "unless", "widowpenalties" }, - ["luatex"]={ "Uchar", "Udelcode", "Udelcodenum", "Udelimiter", "Udelimiterover", "Udelimiterunder", "Umathaccent", "Umathaxis", "Umathbinbinspacing", "Umathbinclosespacing", "Umathbininnerspacing", "Umathbinopenspacing", "Umathbinopspacing", "Umathbinordspacing", "Umathbinpunctspacing", "Umathbinrelspacing", "Umathchar", "Umathchardef", "Umathcharnum", "Umathclosebinspacing", "Umathcloseclosespacing", "Umathcloseinnerspacing", "Umathcloseopenspacing", "Umathcloseopspacing", "Umathcloseordspacing", "Umathclosepunctspacing", "Umathcloserelspacing", "Umathcode", "Umathcodenum", "Umathconnectoroverlapmin", "Umathfractiondelsize", "Umathfractiondenomdown", "Umathfractiondenomvgap", "Umathfractionnumup", "Umathfractionnumvgap", "Umathfractionrule", "Umathinnerbinspacing", "Umathinnerclosespacing", "Umathinnerinnerspacing", "Umathinneropenspacing", "Umathinneropspacing", "Umathinnerordspacing", "Umathinnerpunctspacing", "Umathinnerrelspacing", "Umathlimitabovebgap", "Umathlimitabovekern", "Umathlimitabovevgap", "Umathlimitbelowbgap", "Umathlimitbelowkern", "Umathlimitbelowvgap", "Umathopbinspacing", "Umathopclosespacing", "Umathopenbinspacing", "Umathopenclosespacing", "Umathopeninnerspacing", "Umathopenopenspacing", "Umathopenopspacing", "Umathopenordspacing", "Umathopenpunctspacing", "Umathopenrelspacing", "Umathoperatorsize", "Umathopinnerspacing", "Umathopopenspacing", "Umathopopspacing", "Umathopordspacing", "Umathoppunctspacing", 
"Umathoprelspacing", "Umathordbinspacing", "Umathordclosespacing", "Umathordinnerspacing", "Umathordopenspacing", "Umathordopspacing", "Umathordordspacing", "Umathordpunctspacing", "Umathordrelspacing", "Umathoverbarkern", "Umathoverbarrule", "Umathoverbarvgap", "Umathoverdelimiterbgap", "Umathoverdelimitervgap", "Umathpunctbinspacing", "Umathpunctclosespacing", "Umathpunctinnerspacing", "Umathpunctopenspacing", "Umathpunctopspacing", "Umathpunctordspacing", "Umathpunctpunctspacing", "Umathpunctrelspacing", "Umathquad", "Umathradicaldegreeafter", "Umathradicaldegreebefore", "Umathradicaldegreeraise", "Umathradicalkern", "Umathradicalrule", "Umathradicalvgap", "Umathrelbinspacing", "Umathrelclosespacing", "Umathrelinnerspacing", "Umathrelopenspacing", "Umathrelopspacing", "Umathrelordspacing", "Umathrelpunctspacing", "Umathrelrelspacing", "Umathspaceafterscript", "Umathstackdenomdown", "Umathstacknumup", "Umathstackvgap", "Umathsubshiftdown", "Umathsubshiftdrop", "Umathsubsupshiftdown", "Umathsubsupvgap", "Umathsubtopmax", "Umathsupbottommin", "Umathsupshiftdrop", "Umathsupshiftup", "Umathsupsubbottommax", "Umathunderbarkern", "Umathunderbarrule", "Umathunderbarvgap", "Umathunderdelimiterbgap", "Umathunderdelimitervgap", "Uoverdelimiter", "Uradical", "Uroot", "Ustack", "Ustartdisplaymath", "Ustartmath", "Ustopdisplaymath", "Ustopmath", "Usubscript", "Usuperscript", "Uunderdelimiter", "alignmark", "aligntab", "attribute", "attributedef", "catcodetable", "clearmarks", "crampeddisplaystyle", "crampedscriptscriptstyle", "crampedscriptstyle", "crampedtextstyle", "fontid", "formatname", "gleaders", "ifabsdim", "ifabsnum", "ifprimitive", "initcatcodetable", "latelua", "luaescapestring", "luastartup", "luatexdatestamp", "luatexrevision", "luatexversion", "mathstyle", "nokerns", "noligs", "outputbox", "pageleftoffset", "pagetopoffset", "postexhyphenchar", "posthyphenchar", "preexhyphenchar", "prehyphenchar", "primitive", "savecatcodetable", "scantextokens", 
"suppressfontnotfounderror", "suppressifcsnameerror", "suppresslongerror", "suppressoutererror", "synctex" }, + ["luatex"]={ "Uchar", "Udelcode", "Udelcodenum", "Udelimiter", "Udelimiterover", "Udelimiterunder", "Umathaccent", "Umathaxis", "Umathbinbinspacing", "Umathbinclosespacing", "Umathbininnerspacing", "Umathbinopenspacing", "Umathbinopspacing", "Umathbinordspacing", "Umathbinpunctspacing", "Umathbinrelspacing", "Umathchar", "Umathchardef", "Umathcharnum", "Umathclosebinspacing", "Umathcloseclosespacing", "Umathcloseinnerspacing", "Umathcloseopenspacing", "Umathcloseopspacing", "Umathcloseordspacing", "Umathclosepunctspacing", "Umathcloserelspacing", "Umathcode", "Umathcodenum", "Umathconnectoroverlapmin", "Umathfractiondelsize", "Umathfractiondenomdown", "Umathfractiondenomvgap", "Umathfractionnumup", "Umathfractionnumvgap", "Umathfractionrule", "Umathinnerbinspacing", "Umathinnerclosespacing", "Umathinnerinnerspacing", "Umathinneropenspacing", "Umathinneropspacing", "Umathinnerordspacing", "Umathinnerpunctspacing", "Umathinnerrelspacing", "Umathlimitabovebgap", "Umathlimitabovekern", "Umathlimitabovevgap", "Umathlimitbelowbgap", "Umathlimitbelowkern", "Umathlimitbelowvgap", "Umathopbinspacing", "Umathopclosespacing", "Umathopenbinspacing", "Umathopenclosespacing", "Umathopeninnerspacing", "Umathopenopenspacing", "Umathopenopspacing", "Umathopenordspacing", "Umathopenpunctspacing", "Umathopenrelspacing", "Umathoperatorsize", "Umathopinnerspacing", "Umathopopenspacing", "Umathopopspacing", "Umathopordspacing", "Umathoppunctspacing", "Umathoprelspacing", "Umathordbinspacing", "Umathordclosespacing", "Umathordinnerspacing", "Umathordopenspacing", "Umathordopspacing", "Umathordordspacing", "Umathordpunctspacing", "Umathordrelspacing", "Umathoverbarkern", "Umathoverbarrule", "Umathoverbarvgap", "Umathoverdelimiterbgap", "Umathoverdelimitervgap", "Umathpunctbinspacing", "Umathpunctclosespacing", "Umathpunctinnerspacing", "Umathpunctopenspacing", 
"Umathpunctopspacing", "Umathpunctordspacing", "Umathpunctpunctspacing", "Umathpunctrelspacing", "Umathquad", "Umathradicaldegreeafter", "Umathradicaldegreebefore", "Umathradicaldegreeraise", "Umathradicalkern", "Umathradicalrule", "Umathradicalvgap", "Umathrelbinspacing", "Umathrelclosespacing", "Umathrelinnerspacing", "Umathrelopenspacing", "Umathrelopspacing", "Umathrelordspacing", "Umathrelpunctspacing", "Umathrelrelspacing", "Umathspaceafterscript", "Umathstackdenomdown", "Umathstacknumup", "Umathstackvgap", "Umathsubshiftdown", "Umathsubshiftdrop", "Umathsubsupshiftdown", "Umathsubsupvgap", "Umathsubtopmax", "Umathsupbottommin", "Umathsupshiftdrop", "Umathsupshiftup", "Umathsupsubbottommax", "Umathunderbarkern", "Umathunderbarrule", "Umathunderbarvgap", "Umathunderdelimiterbgap", "Umathunderdelimitervgap", "Uoverdelimiter", "Uradical", "Uroot", "Ustack", "Ustartdisplaymath", "Ustartmath", "Ustopdisplaymath", "Ustopmath", "Usubscript", "Usuperscript", "Uunderdelimiter", "alignmark", "aligntab", "attribute", "attributedef", "catcodetable", "clearmarks", "crampeddisplaystyle", "crampedscriptscriptstyle", "crampedscriptstyle", "crampedtextstyle", "fontid", "formatname", "gleaders", "ifabsdim", "ifabsnum", "ifprimitive", "initcatcodetable", "latelua", "luaescapestring", "luastartup", "luatexdatestamp", "luatexrevision", "luatexversion", "luafunction", "mathstyle", "nokerns", "noligs", "outputbox", "pageleftoffset", "pagetopoffset", "postexhyphenchar", "posthyphenchar", "preexhyphenchar", "prehyphenchar", "primitive", "savecatcodetable", "scantextokens", "suppressfontnotfounderror", "suppressifcsnameerror", "suppresslongerror", "suppressoutererror", "synctex" }, ["omega"]={ "OmegaVersion", "bodydir", "chardp", "charht", "charit", "charwd", "leftghost", "localbrokenpenalty", "localinterlinepenalty", "localleftbox", "localrightbox", "mathdir", "odelcode", "odelimiter", "omathaccent", "omathchar", "omathchardef", "omathcode", "oradical", "pagedir", "pageheight", 
"pagewidth", "pardir", "rightghost", "textdir" }, ["pdftex"]={ "efcode", "expanded", "ifincsname", "ifpdfabsdim", "ifpdfabsnum", "ifpdfprimitive", "leftmarginkern", "letterspacefont", "lpcode", "pdfadjustspacing", "pdfannot", "pdfcatalog", "pdfcolorstack", "pdfcolorstackinit", "pdfcompresslevel", "pdfcopyfont", "pdfcreationdate", "pdfdecimaldigits", "pdfdest", "pdfdestmargin", "pdfdraftmode", "pdfeachlinedepth", "pdfeachlineheight", "pdfendlink", "pdfendthread", "pdffirstlineheight", "pdffontattr", "pdffontexpand", "pdffontname", "pdffontobjnum", "pdffontsize", "pdfgamma", "pdfgentounicode", "pdfglyphtounicode", "pdfhorigin", "pdfignoreddimen", "pdfimageapplygamma", "pdfimagegamma", "pdfimagehicolor", "pdfimageresolution", "pdfincludechars", "pdfinclusioncopyfonts", "pdfinclusionerrorlevel", "pdfinfo", "pdfinsertht", "pdflastannot", "pdflastlinedepth", "pdflastlink", "pdflastobj", "pdflastxform", "pdflastximage", "pdflastximagecolordepth", "pdflastximagepages", "pdflastxpos", "pdflastypos", "pdflinkmargin", "pdfliteral", "pdfmapfile", "pdfmapline", "pdfminorversion", "pdfnames", "pdfnoligatures", "pdfnormaldeviate", "pdfobj", "pdfobjcompresslevel", "pdfoptionpdfminorversion", "pdfoutline", "pdfoutput", "pdfpageattr", "pdfpagebox", "pdfpageheight", "pdfpageref", "pdfpageresources", "pdfpagesattr", "pdfpagewidth", "pdfpkmode", "pdfpkresolution", "pdfprimitive", "pdfprotrudechars", "pdfpxdimen", "pdfrandomseed", "pdfrefobj", "pdfrefxform", "pdfrefximage", "pdfreplacefont", "pdfrestore", "pdfretval", "pdfsave", "pdfsavepos", "pdfsetmatrix", "pdfsetrandomseed", "pdfstartlink", "pdfstartthread", "pdftexbanner", "pdftexrevision", "pdftexversion", "pdfthread", "pdfthreadmargin", "pdftracingfonts", "pdftrailer", "pdfuniformdeviate", "pdfuniqueresname", "pdfvorigin", "pdfxform", "pdfxformattr", "pdfxformname", "pdfxformresources", "pdfximage", "pdfximagebbox", "quitvmode", "rightmarginkern", "rpcode", "tagcode" }, - ["tex"]={ "-", "/", "AlephVersion", "Alephminorversion", 
"Alephrevision", "Alephversion", "OmegaVersion", "Omegaminorversion", "Omegarevision", "Omegaversion", "Udelcode", "Udelcodenum", "Udelimiter", "Udelimiterover", "Udelimiterunder", "Umathaccent", "Umathaxis", "Umathbinbinspacing", "Umathbinclosespacing", "Umathbininnerspacing", "Umathbinopenspacing", "Umathbinopspacing", "Umathbinordspacing", "Umathbinpunctspacing", "Umathbinrelspacing", "Umathchar", "Umathchardef", "Umathcharnum", "Umathclosebinspacing", "Umathcloseclosespacing", "Umathcloseinnerspacing", "Umathcloseopenspacing", "Umathcloseopspacing", "Umathcloseordspacing", "Umathclosepunctspacing", "Umathcloserelspacing", "Umathcode", "Umathcodenum", "Umathconnectoroverlapmin", "Umathfractiondelsize", "Umathfractiondenomdown", "Umathfractiondenomvgap", "Umathfractionnumup", "Umathfractionnumvgap", "Umathfractionrule", "Umathinnerbinspacing", "Umathinnerclosespacing", "Umathinnerinnerspacing", "Umathinneropenspacing", "Umathinneropspacing", "Umathinnerordspacing", "Umathinnerpunctspacing", "Umathinnerrelspacing", "Umathlimitabovebgap", "Umathlimitabovekern", "Umathlimitabovevgap", "Umathlimitbelowbgap", "Umathlimitbelowkern", "Umathlimitbelowvgap", "Umathopbinspacing", "Umathopclosespacing", "Umathopenbinspacing", "Umathopenclosespacing", "Umathopeninnerspacing", "Umathopenopenspacing", "Umathopenopspacing", "Umathopenordspacing", "Umathopenpunctspacing", "Umathopenrelspacing", "Umathoperatorsize", "Umathopinnerspacing", "Umathopopenspacing", "Umathopopspacing", "Umathopordspacing", "Umathoppunctspacing", "Umathoprelspacing", "Umathordbinspacing", "Umathordclosespacing", "Umathordinnerspacing", "Umathordopenspacing", "Umathordopspacing", "Umathordordspacing", "Umathordpunctspacing", "Umathordrelspacing", "Umathoverbarkern", "Umathoverbarrule", "Umathoverbarvgap", "Umathoverdelimiterbgap", "Umathoverdelimitervgap", "Umathpunctbinspacing", "Umathpunctclosespacing", "Umathpunctinnerspacing", "Umathpunctopenspacing", "Umathpunctopspacing", "Umathpunctordspacing", 
"Umathpunctpunctspacing", "Umathpunctrelspacing", "Umathquad", "Umathradicaldegreeafter", "Umathradicaldegreebefore", "Umathradicaldegreeraise", "Umathradicalkern", "Umathradicalrule", "Umathradicalvgap", "Umathrelbinspacing", "Umathrelclosespacing", "Umathrelinnerspacing", "Umathrelopenspacing", "Umathrelopspacing", "Umathrelordspacing", "Umathrelpunctspacing", "Umathrelrelspacing", "Umathspaceafterscript", "Umathstackdenomdown", "Umathstacknumup", "Umathstackvgap", "Umathsubshiftdown", "Umathsubshiftdrop", "Umathsubsupshiftdown", "Umathsubsupvgap", "Umathsubtopmax", "Umathsupbottommin", "Umathsupshiftdrop", "Umathsupshiftup", "Umathsupsubbottommax", "Umathunderbarkern", "Umathunderbarrule", "Umathunderbarvgap", "Umathunderdelimiterbgap", "Umathunderdelimitervgap", "Uoverdelimiter", "Uradical", "Uroot", "Ustack", "Ustartdisplaymath", "Ustartmath", "Ustopdisplaymath", "Ustopmath", "Usubscript", "Usuperscript", "Uunderdelimiter", "above", "abovedisplayshortskip", "abovedisplayskip", "abovewithdelims", "accent", "adjdemerits", "advance", "afterassignment", "aftergroup", "alignmark", "aligntab", "atop", "atopwithdelims", "attribute", "attributedef", "badness", "baselineskip", "batchmode", "begingroup", "belowdisplayshortskip", "belowdisplayskip", "binoppenalty", "bodydir", "botmark", "botmarks", "box", "boxdir", "boxmaxdepth", "brokenpenalty", "catcode", "catcodetable", "char", "chardef", "chardp", "charht", "charit", "charwd", "cleaders", "clearmarks", "closein", "closeout", "clubpenalties", "clubpenalty", "copy", "count", "countdef", "cr", "crampeddisplaystyle", "crampedscriptscriptstyle", "crampedscriptstyle", "crampedtextstyle", "crcr", "csname", "currentgrouplevel", "currentgrouptype", "currentifbranch", "currentiflevel", "currentiftype", "day", "deadcycles", "def", "defaulthyphenchar", "defaultskewchar", "delcode", "delimiter", "delimiterfactor", "delimitershortfall", "detokenize", "dimen", "dimendef", "dimexpr", "directlua", "discretionary", "displayindent", 
"displaylimits", "displaystyle", "displaywidowpenalties", "displaywidowpenalty", "displaywidth", "divide", "doublehyphendemerits", "dp", "dump", "eTeXVersion", "eTeXminorversion", "eTeXrevision", "eTeXversion", "edef", "efcode", "else", "emergencystretch", "end", "endcsname", "endgroup", "endinput", "endlinechar", "eqno", "errhelp", "errmessage", "errorcontextlines", "errorstopmode", "escapechar", "everycr", "everydisplay", "everyeof", "everyhbox", "everyjob", "everymath", "everypar", "everyvbox", "exhyphenchar", "exhyphenpenalty", "expandafter", "expanded", "fam", "fi", "finalhyphendemerits", "firstmark", "firstmarks", "floatingpenalty", "font", "fontchardp", "fontcharht", "fontcharic", "fontcharwd", "fontdimen", "fontid", "fontname", "formatname", "futurelet", "gdef", "gleaders", "global", "globaldefs", "glueexpr", "glueshrink", "glueshrinkorder", "gluestretch", "gluestretchorder", "gluetomu", "halign", "hangafter", "hangindent", "hbadness", "hbox", "hfil", "hfill", "hfilneg", "hfuzz", "hoffset", "holdinginserts", "hrule", "hsize", "hskip", "hss", "ht", "hyphenation", "hyphenchar", "hyphenpenalty", "if", "ifabsdim", "ifabsnum", "ifcase", "ifcat", "ifcsname", "ifdefined", "ifdim", "ifeof", "iffalse", "iffontchar", "ifhbox", "ifhmode", "ifincsname", "ifinner", "ifmmode", "ifnum", "ifodd", "ifpdfabsdim", "ifpdfabsnum", "ifpdfprimitive", "ifprimitive", "iftrue", "ifvbox", "ifvmode", "ifvoid", "ifx", "ignorespaces", "immediate", "indent", "initcatcodetable", "input", "inputlineno", "insert", "insertpenalties", "interactionmode", "interlinepenalties", "interlinepenalty", "jobname", "kern", "language", "lastbox", "lastkern", "lastlinefit", "lastnodetype", "lastpenalty", "lastskip", "latelua", "lccode", "leaders", "left", "leftghost", "lefthyphenmin", "leftmarginkern", "leftskip", "leqno", "let", "letterspacefont", "limits", "linepenalty", "lineskip", "lineskiplimit", "localbrokenpenalty", "localinterlinepenalty", "localleftbox", "localrightbox", "long", "looseness", 
"lower", "lowercase", "lpcode", "luaescapestring", "luastartup", "luatexdatestamp", "luatexrevision", "luatexversion", "mag", "mark", "marks", "mathaccent", "mathbin", "mathchar", "mathchardef", "mathchoice", "mathclose", "mathcode", "mathdir", "mathinner", "mathop", "mathopen", "mathord", "mathpunct", "mathrel", "mathstyle", "mathsurround", "maxdeadcycles", "maxdepth", "meaning", "medmuskip", "message", "middle", "mkern", "month", "moveleft", "moveright", "mskip", "muexpr", "multiply", "muskip", "muskipdef", "mutoglue", "newlinechar", "noalign", "noboundary", "noexpand", "noindent", "nokerns", "noligs", "nolimits", "nolocaldirs", "nolocalwhatsits", "nonscript", "nonstopmode", "nulldelimiterspace", "nullfont", "number", "numexpr", "odelcode", "odelimiter", "omathaccent", "omathchar", "omathchardef", "omathcode", "omit", "openin", "openout", "or", "oradical", "outer", "output", "outputbox", "outputpenalty", "over", "overfullrule", "overline", "overwithdelims", "pagebottomoffset", "pagedepth", "pagedir", "pagediscards", "pagefilllstretch", "pagefillstretch", "pagefilstretch", "pagegoal", "pageheight", "pageleftoffset", "pagerightoffset", "pageshrink", "pagestretch", "pagetopoffset", "pagetotal", "pagewidth", "par", "pardir", "parfillskip", "parindent", "parshape", "parshapedimen", "parshapeindent", "parshapelength", "parskip", "patterns", "pausing", "pdfadjustspacing", "pdfannot", "pdfcatalog", "pdfcolorstack", "pdfcolorstackinit", "pdfcompresslevel", "pdfcopyfont", "pdfcreationdate", "pdfdecimaldigits", "pdfdest", "pdfdestmargin", "pdfdraftmode", "pdfeachlinedepth", "pdfeachlineheight", "pdfendlink", "pdfendthread", "pdffirstlineheight", "pdffontattr", "pdffontexpand", "pdffontname", "pdffontobjnum", "pdffontsize", "pdfgamma", "pdfgentounicode", "pdfglyphtounicode", "pdfhorigin", "pdfignoreddimen", "pdfimageapplygamma", "pdfimagegamma", "pdfimagehicolor", "pdfimageresolution", "pdfincludechars", "pdfinclusioncopyfonts", "pdfinclusionerrorlevel", "pdfinfo", 
"pdfinsertht", "pdflastannot", "pdflastlinedepth", "pdflastlink", "pdflastobj", "pdflastxform", "pdflastximage", "pdflastximagecolordepth", "pdflastximagepages", "pdflastxpos", "pdflastypos", "pdflinkmargin", "pdfliteral", "pdfmapfile", "pdfmapline", "pdfminorversion", "pdfnames", "pdfnoligatures", "pdfnormaldeviate", "pdfobj", "pdfobjcompresslevel", "pdfoptionpdfminorversion", "pdfoutline", "pdfoutput", "pdfpageattr", "pdfpagebox", "pdfpageheight", "pdfpageref", "pdfpageresources", "pdfpagesattr", "pdfpagewidth", "pdfpkmode", "pdfpkresolution", "pdfprimitive", "pdfprotrudechars", "pdfpxdimen", "pdfrandomseed", "pdfrefobj", "pdfrefxform", "pdfrefximage", "pdfreplacefont", "pdfrestore", "pdfretval", "pdfsave", "pdfsavepos", "pdfsetmatrix", "pdfsetrandomseed", "pdfstartlink", "pdfstartthread", "pdftexbanner", "pdftexrevision", "pdftexversion", "pdfthread", "pdfthreadmargin", "pdftracingfonts", "pdftrailer", "pdfuniformdeviate", "pdfuniqueresname", "pdfvorigin", "pdfxform", "pdfxformattr", "pdfxformname", "pdfxformresources", "pdfximage", "pdfximagebbox", "penalty", "postdisplaypenalty", "postexhyphenchar", "posthyphenchar", "predisplaydirection", "predisplaypenalty", "predisplaysize", "preexhyphenchar", "prehyphenchar", "pretolerance", "prevdepth", "prevgraf", "primitive", "protected", "quitvmode", "radical", "raise", "read", "readline", "relax", "relpenalty", "right", "rightghost", "righthyphenmin", "rightmarginkern", "rightskip", "romannumeral", "rpcode", "savecatcodetable", "savinghyphcodes", "savingvdiscards", "scantextokens", "scantokens", "scriptfont", "scriptscriptfont", "scriptscriptstyle", "scriptspace", "scriptstyle", "scrollmode", "setbox", "setlanguage", "sfcode", "shipout", "show", "showbox", "showboxbreadth", "showboxdepth", "showgroups", "showifs", "showlists", "showthe", "showtokens", "skewchar", "skip", "skipdef", "spacefactor", "spaceskip", "span", "special", "splitbotmark", "splitbotmarks", "splitdiscards", "splitfirstmark", "splitfirstmarks", 
"splitmaxdepth", "splittopskip", "string", "suppressfontnotfounderror", "suppressifcsnameerror", "suppresslongerror", "suppressoutererror", "synctex", "tabskip", "tagcode", "textdir", "textfont", "textstyle", "the", "thickmuskip", "thinmuskip", "time", "toks", "toksdef", "tolerance", "topmark", "topmarks", "topskip", "tracingassigns", "tracingcommands", "tracinggroups", "tracingifs", "tracinglostchars", "tracingmacros", "tracingnesting", "tracingonline", "tracingoutput", "tracingpages", "tracingparagraphs", "tracingrestores", "tracingscantokens", "tracingstats", "uccode", "uchyph", "underline", "unexpanded", "unhbox", "unhcopy", "unkern", "unless", "unpenalty", "unskip", "unvbox", "unvcopy", "uppercase", "vadjust", "valign", "vbadness", "vbox", "vcenter", "vfil", "vfill", "vfilneg", "vfuzz", "voffset", "vrule", "vsize", "vskip", "vsplit", "vss", "vtop", "wd", "widowpenalties", "widowpenalty", "write", "xdef", "xleaders", "xspaceskip", "year" }, + ["tex"]={ "-", "/", "AlephVersion", "Alephminorversion", "Alephrevision", "Alephversion", "OmegaVersion", "Omegaminorversion", "Omegarevision", "Omegaversion", "Udelcode", "Udelcodenum", "Udelimiter", "Udelimiterover", "Udelimiterunder", "Umathaccent", "Umathaxis", "Umathbinbinspacing", "Umathbinclosespacing", "Umathbininnerspacing", "Umathbinopenspacing", "Umathbinopspacing", "Umathbinordspacing", "Umathbinpunctspacing", "Umathbinrelspacing", "Umathchar", "Umathchardef", "Umathcharnum", "Umathclosebinspacing", "Umathcloseclosespacing", "Umathcloseinnerspacing", "Umathcloseopenspacing", "Umathcloseopspacing", "Umathcloseordspacing", "Umathclosepunctspacing", "Umathcloserelspacing", "Umathcode", "Umathcodenum", "Umathconnectoroverlapmin", "Umathfractiondelsize", "Umathfractiondenomdown", "Umathfractiondenomvgap", "Umathfractionnumup", "Umathfractionnumvgap", "Umathfractionrule", "Umathinnerbinspacing", "Umathinnerclosespacing", "Umathinnerinnerspacing", "Umathinneropenspacing", "Umathinneropspacing", "Umathinnerordspacing", 
"Umathinnerpunctspacing", "Umathinnerrelspacing", "Umathlimitabovebgap", "Umathlimitabovekern", "Umathlimitabovevgap", "Umathlimitbelowbgap", "Umathlimitbelowkern", "Umathlimitbelowvgap", "Umathopbinspacing", "Umathopclosespacing", "Umathopenbinspacing", "Umathopenclosespacing", "Umathopeninnerspacing", "Umathopenopenspacing", "Umathopenopspacing", "Umathopenordspacing", "Umathopenpunctspacing", "Umathopenrelspacing", "Umathoperatorsize", "Umathopinnerspacing", "Umathopopenspacing", "Umathopopspacing", "Umathopordspacing", "Umathoppunctspacing", "Umathoprelspacing", "Umathordbinspacing", "Umathordclosespacing", "Umathordinnerspacing", "Umathordopenspacing", "Umathordopspacing", "Umathordordspacing", "Umathordpunctspacing", "Umathordrelspacing", "Umathoverbarkern", "Umathoverbarrule", "Umathoverbarvgap", "Umathoverdelimiterbgap", "Umathoverdelimitervgap", "Umathpunctbinspacing", "Umathpunctclosespacing", "Umathpunctinnerspacing", "Umathpunctopenspacing", "Umathpunctopspacing", "Umathpunctordspacing", "Umathpunctpunctspacing", "Umathpunctrelspacing", "Umathquad", "Umathradicaldegreeafter", "Umathradicaldegreebefore", "Umathradicaldegreeraise", "Umathradicalkern", "Umathradicalrule", "Umathradicalvgap", "Umathrelbinspacing", "Umathrelclosespacing", "Umathrelinnerspacing", "Umathrelopenspacing", "Umathrelopspacing", "Umathrelordspacing", "Umathrelpunctspacing", "Umathrelrelspacing", "Umathspaceafterscript", "Umathstackdenomdown", "Umathstacknumup", "Umathstackvgap", "Umathsubshiftdown", "Umathsubshiftdrop", "Umathsubsupshiftdown", "Umathsubsupvgap", "Umathsubtopmax", "Umathsupbottommin", "Umathsupshiftdrop", "Umathsupshiftup", "Umathsupsubbottommax", "Umathunderbarkern", "Umathunderbarrule", "Umathunderbarvgap", "Umathunderdelimiterbgap", "Umathunderdelimitervgap", "Uoverdelimiter", "Uradical", "Uroot", "Ustack", "Ustartdisplaymath", "Ustartmath", "Ustopdisplaymath", "Ustopmath", "Usubscript", "Usuperscript", "Uunderdelimiter", "above", "abovedisplayshortskip", 
"abovedisplayskip", "abovewithdelims", "accent", "adjdemerits", "advance", "afterassignment", "aftergroup", "alignmark", "aligntab", "atop", "atopwithdelims", "attribute", "attributedef", "badness", "baselineskip", "batchmode", "begingroup", "belowdisplayshortskip", "belowdisplayskip", "binoppenalty", "bodydir", "botmark", "botmarks", "box", "boxdir", "boxmaxdepth", "brokenpenalty", "catcode", "catcodetable", "char", "chardef", "cleaders", "clearmarks", "closein", "closeout", "clubpenalties", "clubpenalty", "copy", "count", "countdef", "cr", "crampeddisplaystyle", "crampedscriptscriptstyle", "crampedscriptstyle", "crampedtextstyle", "crcr", "csname", "currentgrouplevel", "currentgrouptype", "currentifbranch", "currentiflevel", "currentiftype", "day", "deadcycles", "def", "defaulthyphenchar", "defaultskewchar", "delcode", "delimiter", "delimiterfactor", "delimitershortfall", "detokenize", "dimen", "dimendef", "dimexpr", "directlua", "discretionary", "displayindent", "displaylimits", "displaystyle", "displaywidowpenalties", "displaywidowpenalty", "displaywidth", "divide", "doublehyphendemerits", "dp", "dump", "eTeXVersion", "eTeXminorversion", "eTeXrevision", "eTeXversion", "edef", "efcode", "else", "emergencystretch", "end", "endcsname", "endgroup", "endinput", "endlinechar", "eqno", "errhelp", "errmessage", "errorcontextlines", "errorstopmode", "escapechar", "everycr", "everydisplay", "everyeof", "everyhbox", "everyjob", "everymath", "everypar", "everyvbox", "exhyphenchar", "exhyphenpenalty", "expandafter", "expanded", "fam", "fi", "finalhyphendemerits", "firstmark", "firstmarks", "floatingpenalty", "font", "fontchardp", "fontcharht", "fontcharic", "fontcharwd", "fontdimen", "fontid", "fontname", "formatname", "futurelet", "gdef", "gleaders", "global", "globaldefs", "glueexpr", "glueshrink", "glueshrinkorder", "gluestretch", "gluestretchorder", "gluetomu", "halign", "hangafter", "hangindent", "hbadness", "hbox", "hfil", "hfill", "hfilneg", "hfuzz", "hoffset", 
"holdinginserts", "hrule", "hsize", "hskip", "hss", "ht", "hyphenation", "hyphenchar", "hyphenpenalty", "if", "ifabsdim", "ifabsnum", "ifcase", "ifcat", "ifcsname", "ifdefined", "ifdim", "ifeof", "iffalse", "iffontchar", "ifhbox", "ifhmode", "ifincsname", "ifinner", "ifmmode", "ifnum", "ifodd", "ifpdfabsdim", "ifpdfabsnum", "ifpdfprimitive", "ifprimitive", "iftrue", "ifvbox", "ifvmode", "ifvoid", "ifx", "ignorespaces", "immediate", "indent", "initcatcodetable", "input", "inputlineno", "insert", "insertpenalties", "interactionmode", "interlinepenalties", "interlinepenalty", "jobname", "kern", "language", "lastbox", "lastkern", "lastlinefit", "lastnodetype", "lastpenalty", "lastskip", "latelua", "lccode", "leaders", "left", "leftghost", "lefthyphenmin", "leftmarginkern", "leftskip", "leqno", "let", "letterspacefont", "limits", "linepenalty", "lineskip", "lineskiplimit", "localbrokenpenalty", "localinterlinepenalty", "localleftbox", "localrightbox", "long", "looseness", "lower", "lowercase", "lpcode", "luaescapestring", "luastartup", "luatexdatestamp", "luatexrevision", "luatexversion", "mag", "mark", "marks", "mathaccent", "mathbin", "mathchar", "mathchardef", "mathchoice", "mathclose", "mathcode", "mathdir", "mathinner", "mathop", "mathopen", "mathord", "mathpunct", "mathrel", "mathstyle", "mathsurround", "maxdeadcycles", "maxdepth", "meaning", "medmuskip", "message", "middle", "mkern", "month", "moveleft", "moveright", "mskip", "muexpr", "multiply", "muskip", "muskipdef", "mutoglue", "newlinechar", "noalign", "noboundary", "noexpand", "noindent", "nokerns", "noligs", "nolimits", "nolocaldirs", "nolocalwhatsits", "nonscript", "nonstopmode", "nulldelimiterspace", "nullfont", "number", "numexpr", "odelcode", "odelimiter", "omathaccent", "omathchar", "omathchardef", "omathcode", "omit", "openin", "openout", "or", "oradical", "outer", "output", "outputbox", "outputpenalty", "over", "overfullrule", "overline", "overwithdelims", "pagebottomoffset", "pagedepth", "pagedir", 
"pagediscards", "pagefilllstretch", "pagefillstretch", "pagefilstretch", "pagegoal", "pageheight", "pageleftoffset", "pagerightoffset", "pageshrink", "pagestretch", "pagetopoffset", "pagetotal", "pagewidth", "par", "pardir", "parfillskip", "parindent", "parshape", "parshapedimen", "parshapeindent", "parshapelength", "parskip", "patterns", "pausing", "pdfadjustspacing", "pdfannot", "pdfcatalog", "pdfcolorstack", "pdfcolorstackinit", "pdfcompresslevel", "pdfcopyfont", "pdfcreationdate", "pdfdecimaldigits", "pdfdest", "pdfdestmargin", "pdfdraftmode", "pdfeachlinedepth", "pdfeachlineheight", "pdfendlink", "pdfendthread", "pdffirstlineheight", "pdffontattr", "pdffontexpand", "pdffontname", "pdffontobjnum", "pdffontsize", "pdfgamma", "pdfgentounicode", "pdfglyphtounicode", "pdfhorigin", "pdfignoreddimen", "pdfimageapplygamma", "pdfimagegamma", "pdfimagehicolor", "pdfimageresolution", "pdfincludechars", "pdfinclusioncopyfonts", "pdfinclusionerrorlevel", "pdfinfo", "pdfinsertht", "pdflastannot", "pdflastlinedepth", "pdflastlink", "pdflastobj", "pdflastxform", "pdflastximage", "pdflastximagecolordepth", "pdflastximagepages", "pdflastxpos", "pdflastypos", "pdflinkmargin", "pdfliteral", "pdfmapfile", "pdfmapline", "pdfminorversion", "pdfnames", "pdfnoligatures", "pdfnormaldeviate", "pdfobj", "pdfobjcompresslevel", "pdfoptionpdfminorversion", "pdfoutline", "pdfoutput", "pdfpageattr", "pdfpagebox", "pdfpageheight", "pdfpageref", "pdfpageresources", "pdfpagesattr", "pdfpagewidth", "pdfpkmode", "pdfpkresolution", "pdfprimitive", "pdfprotrudechars", "pdfpxdimen", "pdfrandomseed", "pdfrefobj", "pdfrefxform", "pdfrefximage", "pdfreplacefont", "pdfrestore", "pdfretval", "pdfsave", "pdfsavepos", "pdfsetmatrix", "pdfsetrandomseed", "pdfstartlink", "pdfstartthread", "pdftexbanner", "pdftexrevision", "pdftexversion", "pdfthread", "pdfthreadmargin", "pdftracingfonts", "pdftrailer", "pdfuniformdeviate", "pdfuniqueresname", "pdfvorigin", "pdfxform", "pdfxformattr", "pdfxformname", 
"pdfxformresources", "pdfximage", "pdfximagebbox", "penalty", "postdisplaypenalty", "postexhyphenchar", "posthyphenchar", "predisplaydirection", "predisplaypenalty", "predisplaysize", "preexhyphenchar", "prehyphenchar", "pretolerance", "prevdepth", "prevgraf", "primitive", "protected", "quitvmode", "radical", "raise", "read", "readline", "relax", "relpenalty", "right", "rightghost", "righthyphenmin", "rightmarginkern", "rightskip", "romannumeral", "rpcode", "savecatcodetable", "savinghyphcodes", "savingvdiscards", "scantextokens", "scantokens", "scriptfont", "scriptscriptfont", "scriptscriptstyle", "scriptspace", "scriptstyle", "scrollmode", "setbox", "setlanguage", "sfcode", "shipout", "show", "showbox", "showboxbreadth", "showboxdepth", "showgroups", "showifs", "showlists", "showthe", "showtokens", "skewchar", "skip", "skipdef", "spacefactor", "spaceskip", "span", "special", "splitbotmark", "splitbotmarks", "splitdiscards", "splitfirstmark", "splitfirstmarks", "splitmaxdepth", "splittopskip", "string", "suppressfontnotfounderror", "suppressifcsnameerror", "suppresslongerror", "suppressoutererror", "synctex", "tabskip", "tagcode", "textdir", "textfont", "textstyle", "the", "thickmuskip", "thinmuskip", "time", "toks", "toksdef", "tolerance", "topmark", "topmarks", "topskip", "tracingassigns", "tracingcommands", "tracinggroups", "tracingifs", "tracinglostchars", "tracingmacros", "tracingnesting", "tracingonline", "tracingoutput", "tracingpages", "tracingparagraphs", "tracingrestores", "tracingscantokens", "tracingstats", "uccode", "uchyph", "underline", "unexpanded", "unhbox", "unhcopy", "unkern", "unless", "unpenalty", "unskip", "unvbox", "unvcopy", "uppercase", "vadjust", "valign", "vbadness", "vbox", "vcenter", "vfil", "vfill", "vfilneg", "vfuzz", "voffset", "vrule", "vsize", "vskip", "vsplit", "vss", "vtop", "wd", "widowpenalties", "widowpenalty", "write", "xdef", "xleaders", "xspaceskip", "year" }, ["xetex"]={ "XeTeXversion" }, }
\ No newline at end of file diff --git a/context/data/scite/context/lexers/lexer.lua b/context/data/scite/context/lexers/lexer.lua new file mode 100644 index 000000000..9582f6a76 --- /dev/null +++ b/context/data/scite/context/lexers/lexer.lua @@ -0,0 +1,3 @@ +-- this works ok: + +return require("scite-context-lexer") diff --git a/context/data/scite/context/lexers/scite-context-lexer-bibtex.lua b/context/data/scite/context/lexers/scite-context-lexer-bibtex.lua new file mode 100644 index 000000000..88b070e5e --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer-bibtex.lua @@ -0,0 +1,176 @@ +local info = { + version = 1.002, + comment = "scintilla lpeg lexer for bibtex", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +local global, string, table, lpeg = _G, string, table, lpeg +local P, R, S, V = lpeg.P, lpeg.R, lpeg.S, lpeg.V +local type = type + +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns + +local token = lexer.token +local exact_match = lexer.exact_match + +local bibtexlexer = lexer.new("xml","scite-context-lexer-xml") +local whitespace = bibtexlexer.whitespace + + local escape, left, right = P("\\"), P('{'), P('}') + + patterns.balanced = P { + [1] = ((escape * (left+right)) + (1 - (left+right)) + V(2))^0, + [2] = left * V(1) * right + } + +-- taken from bibl-bib.lua + +local anything = patterns.anything +local percent = P("%") +local start = P("@") +local comma = P(",") +local hash = P("#") +local escape = P("\\") +local single = P("'") +local double = P('"') +local left = P('{') +local right = P('}') +local lineending = S("\n\r") +local space = S(" \t\n\r\f") +local spaces = space^1 +local equal = P("=") + +local keyword = (R("az","AZ","09") + S("@_:-"))^1 +local s_quoted = ((escape*single) + spaces^1 + (1-single))^0 +local d_quoted = ((escape*double) + spaces^1 + (1-double))^0 
+local balanced = patterns.balanced + +local t_spacing = token(whitespace, space^1) +local t_optionalws = token("default", space^1)^0 + +local t_equal = token("operator",equal) +local t_left = token("grouping",left) +local t_right = token("grouping",right) +local t_comma = token("operator",comma) +local t_hash = token("operator",hash) + +local t_s_value = token("operator",single) + * token("text",s_quoted) + * token("operator",single) +local t_d_value = token("operator",double) + * token("text",d_quoted) + * token("operator",double) +local t_b_value = token("operator",left) + * token("text",balanced) + * token("operator",right) +local t_r_value = token("text",keyword) + +local t_keyword = token("keyword",keyword) +local t_key = token("command",keyword) +local t_label = token("warning",keyword) + +local t_somevalue = t_s_value + t_d_value + t_b_value + t_r_value +local t_value = t_somevalue + * ((t_optionalws * t_hash * t_optionalws) * t_somevalue)^0 + +local t_assignment = t_optionalws + * t_key + * t_optionalws + * t_equal + * t_optionalws + * t_value + +local t_shortcut = t_keyword + * t_optionalws + * t_left + * t_optionalws + * (t_assignment * t_comma^0)^0 + * t_optionalws + * t_right + +local t_definition = t_keyword + * t_optionalws + * t_left + * t_optionalws + * t_label + * t_optionalws + * t_comma + * (t_assignment * t_comma^0)^0 + * t_optionalws + * t_right + +local t_comment = t_keyword + * t_optionalws + * t_left + * token("text",(1-t_right)^0) + * t_optionalws + * t_right + +local t_forget = token("comment",percent^1 * (1-lineending)^0) + +local t_rest = token("default",anything) + +-- this kind of lexing seems impossible as the size of the buffer passed to the lexer is not +-- large enough .. but we can cheat and use this: +-- +-- function OnOpen(filename) editor:Colourise(1,editor.TextLength) end -- or is it 0? 
+ +bibtexlexer._rules = { + { "whitespace", t_spacing }, + { "forget", t_forget }, + { "shortcut", t_shortcut }, + { "definition", t_definition }, + { "comment", t_comment }, + { "rest", t_rest }, +} + +-- local t_assignment = t_key +-- * t_optionalws +-- * t_equal +-- * t_optionalws +-- * t_value +-- +-- local t_shortcut = t_keyword +-- * t_optionalws +-- * t_left +-- +-- local t_definition = t_keyword +-- * t_optionalws +-- * t_left +-- * t_optionalws +-- * t_label +-- * t_optionalws +-- * t_comma +-- +-- bibtexlexer._rules = { +-- { "whitespace", t_spacing }, +-- { "assignment", t_assignment }, +-- { "definition", t_definition }, +-- { "shortcut", t_shortcut }, +-- { "right", t_right }, +-- { "comma", t_comma }, +-- { "forget", t_forget }, +-- { "comment", t_comment }, +-- { "rest", t_rest }, +-- } + +bibtexlexer._tokenstyles = context.styleset + +bibtexlexer._foldpattern = P("{") + P("}") + +bibtexlexer._foldsymbols = { + _patterns = { + "{", + "}", + }, + ["grouping"] = { + ["{"] = 1, + ["}"] = -1, + }, +} + +return bibtexlexer diff --git a/context/data/scite/lexers/scite-context-lexer-cld.lua b/context/data/scite/context/lexers/scite-context-lexer-cld.lua index 1e30c18a2..3442a195c 100644 --- a/context/data/scite/lexers/scite-context-lexer-cld.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-cld.lua @@ -6,13 +6,14 @@ local info = { license = "see context related readme files", } -local lexer = lexer +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns -local cldlexer = { _NAME = "cld", _FILENAME = "scite-context-lexer-cld" } -local whitespace = lexer.WHITESPACE -- maybe we need to fix this -local context = lexer.context +local cldlexer = lexer.new("cld","scite-context-lexer-cld") +local lualexer = lexer.load("scite-context-lexer-lua") -local lualexer = lexer.load('scite-context-lexer-lua') +-- can probably be done nicer now, a bit of a hack cldlexer._rules = lualexer._rules_cld cldlexer._tokenstyles 
= lualexer._tokenstyles diff --git a/context/data/scite/context/lexers/scite-context-lexer-cpp-web.lua b/context/data/scite/context/lexers/scite-context-lexer-cpp-web.lua new file mode 100644 index 000000000..daa9221ba --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer-cpp-web.lua @@ -0,0 +1,23 @@ +local info = { + version = 1.002, + comment = "scintilla lpeg lexer for cpp web", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns + +local cppweblexer = lexer.new("cpp-web","scite-context-lexer-cpp") +local cpplexer = lexer.load("scite-context-lexer-cpp") + +-- can probably be done nicer now, a bit of a hack + +cppweblexer._rules = cpplexer._rules_web +cppweblexer._tokenstyles = cpplexer._tokenstyles +cppweblexer._foldsymbols = cpplexer._foldsymbols +cppweblexer._directives = cpplexer._directives + +return cppweblexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-cpp.lua b/context/data/scite/context/lexers/scite-context-lexer-cpp.lua new file mode 100644 index 000000000..31180e6a5 --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer-cpp.lua @@ -0,0 +1,188 @@ +local info = { + version = 1.002, + comment = "scintilla lpeg lexer for cpp", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +-- looks like the original cpp lexer but web ready (so nothing special here yet) + +local P, R, S = lpeg.P, lpeg.R, lpeg.S + +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns + +local token = lexer.token +local exact_match = lexer.exact_match + +local cpplexer = lexer.new("cpp","scite-context-lexer-cpp") +local whitespace = cpplexer.whitespace + +local keywords = { --
copied from cpp.lua + -- c + "asm", "auto", "break", "case", "const", "continue", "default", "do", "else", + "extern", "false", "for", "goto", "if", "inline", "register", "return", + "sizeof", "static", "switch", "true", "typedef", "volatile", "while", + "restrict", + -- hm + "_Bool", "_Complex", "_Pragma", "_Imaginary", + -- c++. + "catch", "class", "const_cast", "delete", "dynamic_cast", "explicit", + "export", "friend", "mutable", "namespace", "new", "operator", "private", + "protected", "public", "signals", "slots", "reinterpret_cast", + "static_assert", "static_cast", "template", "this", "throw", "try", "typeid", + "typename", "using", "virtual" +} + +local datatypes = { -- copied from cpp.lua + "bool", "char", "double", "enum", "float", "int", "long", "short", "signed", + "struct", "union", "unsigned", "void" +} + +local macros = { -- copied from cpp.lua + "define", "elif", "else", "endif", "error", "if", "ifdef", "ifndef", "import", + "include", "line", "pragma", "undef", "using", "warning" +} + +local space = patterns.space -- S(" \n\r\t\f\v") +local any = patterns.any +local restofline = patterns.restofline +local startofline = patterns.startofline + +local squote = P("'") +local dquote = P('"') +local period = P(".") +local escaped = P("\\") * P(1) +local slashes = P("//") +local begincomment = P("/*") +local endcomment = P("*/") +local percent = P("%") + +local hexadecimal = patterns.hexadecimal +local decimal = patterns.decimal +local float = patterns.float +local integer = P("-")^-1 * (hexadecimal + decimal) -- also in patterns ? 
+ +local spacing = token(whitespace, space^1) +local rest = token("default", any) + +local shortcomment = token("comment", slashes * restofline^0) +local longcomment = token("comment", begincomment * (1-endcomment)^0 * endcomment^-1) + +local shortstring = token("quote", dquote) -- can be shared + * token("string", (escaped + (1-dquote))^0) + * token("quote", dquote) + + token("quote", squote) + * token("string", (escaped + (1-squote))^0) + * token("quote", squote) + +local number = token("number", float + integer) + +local validword = R("AZ","az","__") * R("AZ","az","__","09")^0 +local identifier = token("default",validword) + +local operator = token("special", S("+-*/%^!=<>;:{}[]().&|?~")) + +----- optionalspace = spacing^0 + +local p_keywords = exact_match(keywords ) +local p_datatypes = exact_match(datatypes) +local p_macros = exact_match(macros) + +local keyword = token("keyword", p_keywords) +local datatype = token("keyword", p_datatypes) +local identifier = token("default", validword) + +local macro = token("data", #P("#") * startofline * P("#") * S("\t ")^0 * p_macros) + +cpplexer._rules = { + { "whitespace", spacing }, + { "keyword", keyword }, + { "type", datatype }, + { "identifier", identifier }, + { "string", shortstring }, + { "longcomment", longcomment }, + { "shortcomment", shortcomment }, + { "number", number }, + { "macro", macro }, + { "operator", operator }, + { "rest", rest }, +} + +local web = lexer.loadluafile("scite-context-lexer-web-snippets") + +if web then + + lexer.inform("supporting web snippets in cpp lexer") + + cpplexer._rules_web = { + { "whitespace", spacing }, + { "keyword", keyword }, + { "type", datatype }, + { "identifier", identifier }, + { "string", shortstring }, + { "longcomment", longcomment }, + { "shortcomment", shortcomment }, + { "web", web.pattern }, + { "number", number }, + { "macro", macro }, + { "operator", operator }, + { "rest", rest }, + } + +else + + lexer.report("not supporting web snippets in cpp lexer") + + 
cpplexer._rules_web = { + { "whitespace", spacing }, + { "keyword", keyword }, + { "type", datatype }, + { "identifier", identifier }, + { "string", shortstring }, + { "longcomment", longcomment }, + { "shortcomment", shortcomment }, + { "number", number }, + { "macro", macro }, + { "operator", operator }, + { "rest", rest }, + } + +end + +cpplexer._tokenstyles = context.styleset + +cpplexer._foldpattern = P("/*") + P("*/") + S("{}") -- separate entry else interference + +cpplexer._foldsymbols = { + _patterns = { + "[{}]", + "/%*", + "%*/", + }, + -- ["data"] = { -- macro + -- ["region"] = 1, + -- ["endregion"] = -1, + -- ["if"] = 1, + -- ["ifdef"] = 1, + -- ["ifndef"] = 1, + -- ["endif"] = -1, + -- }, + ["special"] = { -- operator + ["{"] = 1, + ["}"] = -1, + }, + ["comment"] = { + ["/*"] = 1, + ["*/"] = -1, + } +} + +-- -- by indentation: + +cpplexer._foldpatterns = nil +cpplexer._foldsymbols = nil + +return cpplexer diff --git a/context/data/scite/lexers/scite-context-lexer-lua-longstring.lua b/context/data/scite/context/lexers/scite-context-lexer-lua-longstring.lua index fdec301be..855adbe4e 100644 --- a/context/data/scite/lexers/scite-context-lexer-lua-longstring.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-lua-longstring.lua @@ -1,20 +1,21 @@ local info = { version = 1.002, - comment = "scintilla lpeg lexer for lua", + comment = "scintilla lpeg lexer for lua longstrings", author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", copyright = "PRAGMA ADE / ConTeXt Development Team", license = "see context related readme files", } -local lexer = lexer -local token = lexer.token -local P = lpeg.P - -local stringlexer = { _NAME = "lua-longstring", _FILENAME = "scite-context-lexer-lua-longstring" } -local whitespace = lexer.WHITESPACE +local lexer = require("lexer") -- require("scite-context-lexer") local context = lexer.context +local patterns = context.patterns + +local token = lexer.token + +local stringlexer = 
lexer.new("lua-longstring","scite-context-lexer-lua-longstring") +local whitespace = stringlexer.whitespace -local space = lexer.space +local space = patterns.space local nospace = 1 - space local p_spaces = token(whitespace, space ^1) @@ -25,6 +26,6 @@ stringlexer._rules = { { "string", p_string }, } -stringlexer._tokenstyles = lexer.context.styleset +stringlexer._tokenstyles = context.styleset return stringlexer diff --git a/context/data/scite/lexers/scite-context-lexer-lua.lua b/context/data/scite/context/lexers/scite-context-lexer-lua.lua index 4c276b1bb..c44d586ba 100644 --- a/context/data/scite/lexers/scite-context-lexer-lua.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-lua.lua @@ -6,58 +6,68 @@ local info = { license = "see context related readme files", } --- todo: _G.print (keep _G colored) - -if not lexer._CONTEXTEXTENSIONS then require("scite-context-lexer") end +-- beware: all multiline is messy, so even if it's no lexer, it should be an embedded lexer +-- we probably could use a local whitespace variant but this is cleaner -local lexer = lexer -local token, style, colors, exact_match, no_style = lexer.token, lexer.style, lexer.colors, lexer.exact_match, lexer.style_nothing -local P, R, S, C, Cg, Cb, Cs, Cmt = lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.Cg, lpeg.Cb, lpeg.Cs, lpeg.Cmt +local P, R, S, C, Cmt, Cp = lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.Cmt, lpeg.Cp local match, find = string.match, string.find local setmetatable = setmetatable --- beware: all multiline is messy, so even if it's no lexer, it should be an embedded lexer --- we probably could use a local whitespace variant but this is cleaner - -local lualexer = { _NAME = "lua", _FILENAME = "scite-context-lexer-lua" } -local whitespace = lexer.WHITESPACE +local lexer = require("lexer") local context = lexer.context +local patterns = context.patterns + +local token = lexer.token +local exact_match = lexer.exact_match +local just_match = lexer.just_match + +local lualexer = 
lexer.new("lua","scite-context-lexer-lua") +local whitespace = lualexer.whitespace local stringlexer = lexer.load("scite-context-lexer-lua-longstring") -local directives = { } -- communication channel +local directives = { } -- communication channel -- this will be extended +-- we could combine some in a hash that returns the class that then makes the token +-- this can save time on large files + local keywords = { - 'and', 'break', 'do', 'else', 'elseif', 'end', 'false', 'for', 'function', -- 'goto', - 'if', 'in', 'local', 'nil', 'not', 'or', 'repeat', 'return', 'then', 'true', - 'until', 'while', + "and", "break", "do", "else", "elseif", "end", "false", "for", "function", -- "goto", + "if", "in", "local", "nil", "not", "or", "repeat", "return", "then", "true", + "until", "while", } local functions = { - 'assert', 'collectgarbage', 'dofile', 'error', 'getmetatable', - 'ipairs', 'load', 'loadfile', 'module', 'next', 'pairs', - 'pcall', 'print', 'rawequal', 'rawget', 'rawset', 'require', - 'setmetatable', 'tonumber', 'tostring', 'type', 'unpack', 'xpcall', 'select', + "assert", "collectgarbage", "dofile", "error", "getmetatable", + "ipairs", "load", "loadfile", "module", "next", "pairs", + "pcall", "print", "rawequal", "rawget", "rawset", "require", + "setmetatable", "tonumber", "tostring", "type", "unpack", "xpcall", "select", "string", "table", "coroutine", "debug", "file", "io", "lpeg", "math", "os", "package", "bit32", } local constants = { - '_G', '_VERSION', '_M', '...', '_ENV', + "_G", "_VERSION", "_M", "...", "_ENV", -- here too - '__add', '__call', '__concat', '__div', '__idiv', '__eq', '__gc', '__index', - '__le', '__lt', '__metatable', '__mode', '__mul', '__newindex', - '__pow', '__sub', '__tostring', '__unm', '__len', - '__pairs', '__ipairs', - 'NaN', + "__add", "__call", "__concat", "__div", "__idiv", "__eq", "__gc", "__index", + "__le", "__lt", "__metatable", "__mode", "__mul", "__newindex", + "__pow", "__sub", "__tostring", "__unm", "__len", + 
"__pairs", "__ipairs", + "NaN", } +-- local tokenmappings = { } +-- +-- for i=1,#keywords do tokenmappings[keywords [i]] = "keyword" } +-- for i=1,#functions do tokenmappings[functions[i]] = "function" } +-- for i=1,#constants do tokenmappings[constants[i]] = "constant" } + local internals = { -- __ - 'add', 'call', 'concat', 'div', 'eq', 'gc', 'index', - 'le', 'lt', 'metatable', 'mode', 'mul', 'newindex', - 'pow', 'sub', 'tostring', 'unm', 'len', + "add", "call", "concat", "div", "eq", "gc", "index", + "le", "lt", "metatable", "mode", "mul", "newindex", + "pow", "sub", "tostring", "unm", "len", } local depricated = { @@ -67,7 +77,9 @@ local depricated = { } local csnames = { -- todo: option + "commands", "context", + "ctx", "metafun", "metapost", } @@ -81,14 +93,14 @@ local longonestart = P("[[") local longonestop = P("]]") local longonestring = (1-longonestop)^0 -local longtwostart = P('[') * Cmt(equals,setlevel) * P('[') -local longtwostop = P(']') * equals * P(']') +local longtwostart = P("[") * Cmt(equals,setlevel) * P("[") +local longtwostop = P("]") * equals * P("]") local sentinels = { } setmetatable(sentinels, { __index = function(t,k) local v = "]" .. k .. "]" t[k] = v return v end }) local longtwostring = P(function(input,index) if level then - -- local sentinel = ']' .. level .. ']' + -- local sentinel = "]" .. level .. "]" local sentinel = sentinels[level] local _, stop = find(input,sentinel,index,true) return stop and stop + 1 - #sentinel or #input + 1 @@ -99,32 +111,33 @@ end) local longtwostring_end = P(function(input,index) if level then - -- local sentinel = ']' .. level .. ']' + -- local sentinel = "]" .. level .. "]" local sentinel = sentinels[level] local _, stop = find(input,sentinel,index,true) return stop and stop + 1 or #input + 1 end end) -local longcomment = Cmt(#('[[' + ('[' * C(equals) * '[')), function(input,index,level) - -- local sentinel = ']' .. level .. 
']' +local longcomment = Cmt(#("[[" + ("[" * C(equals) * "[")), function(input,index,level) + -- local sentinel = "]" .. level .. "]" local sentinel = sentinels[level] local _, stop = find(input,sentinel,index,true) return stop and stop + 1 or #input + 1 end) -local space = lexer.space -- S(" \n\r\t\f\v") -local any = lexer.any +local space = patterns.space -- S(" \n\r\t\f\v") +local any = patterns.any +local eol = patterns.eol local squote = P("'") local dquote = P('"') local escaped = P("\\") * P(1) -local dashes = P('--') +local dashes = P("--") local spacing = token(whitespace, space^1) local rest = token("default", any) -local shortcomment = token("comment", dashes * lexer.nonnewline^0) +local shortcomment = token("comment", dashes * (1-eol)^0) local longcomment = token("comment", dashes * longcomment) -- fails on very long string with \ at end of lines (needs embedded lexer) @@ -149,21 +162,23 @@ local string = shortstring lexer.embed_lexer(lualexer, stringlexer, token("quote",longtwostart), token("string",longtwostring_body) * token("quote",longtwostring_end)) -local integer = P("-")^-1 * (lexer.hex_num + lexer.dec_num) -local number = token("number", lexer.float + integer) +local integer = P("-")^-1 * (patterns.hexadecimal + patterns.decimal) +local number = token("number", patterns.float + integer) -- officially 127-255 are ok but not utf so useless -local validword = R("AZ","az","__") * R("AZ","az","__","09")^0 +----- validword = R("AZ","az","__") * R("AZ","az","__","09")^0 local utf8character = P(1) * R("\128\191")^1 local validword = (R("AZ","az","__") + utf8character) * (R("AZ","az","__","09") + utf8character)^0 +local validsuffix = (R("AZ","az") + utf8character) * (R("AZ","az","__","09") + utf8character)^0 local identifier = token("default",validword) ----- operator = token("special", P('..') + P('~=') + S('+-*/%^#=<>;:,.{}[]()')) -- maybe split off {}[]() ----- operator = token("special", S('+-*/%^#=<>;:,{}[]()') + P('..') + P('.') + P('~=') ) -- 
maybe split off {}[]() -local operator = token("special", S('+-*/%^#=<>;:,{}[]().') + P('~=') ) -- no ^1 because of nested lexers +----- operator = token("special", S('+-*/%^#=<>;:,{}[]().') + P('~=') ) -- no ^1 because of nested lexers +local operator = token("special", S('+-*/%^#=<>;:,{}[]().|~')) -- no ^1 because of nested lexers local structure = token("special", S('{}[]()')) @@ -182,8 +197,7 @@ local p_functions = exact_match(functions) local p_constants = exact_match(constants) local p_internals = P("__") * exact_match(internals) -local p_csnames = exact_match(csnames) - +local p_csnames = just_match(csnames) local keyword = token("keyword", p_keywords) local builtin = token("plain", p_functions) local constant = token("data", p_constants) @@ -191,8 +205,10 @@ local internal = token("data", p_internals) local csname = token("user", p_csnames) * ( optionalspace * hasargument - + ( optionalspace * token("special", S(".:")) * optionalspace * token("user", validword) )^1 + + ( optionalspace * token("special", S(".:")) * optionalspace * token("user", validword ) )^1 + + token("user", P("_") * validsuffix) ) + local identifier = token("default", validword) * ( optionalspace * token("special", S(".:")) * optionalspace * ( token("warning", p_keywords) + @@ -200,22 +216,33 @@ local identifier = token("default", validword) token("default", validword ) ) )^0 +-- local t = { } for k, v in next, tokenmappings do t[#t+1] = k end t = table.concat(t) +-- -- local experimental = (S(t)^1) / function(s) return tokenmappings[s] end * Cp() +-- +-- local experimental = Cmt(S(t)^1, function(_,i,s) +-- local t = tokenmappings[s] +-- if t then +-- return true, t, i +-- end +-- end) + lualexer._rules = { - { 'whitespace', spacing }, - { 'keyword', keyword }, - -- { 'structure', structure }, - { 'function', builtin }, - { 'csname', csname }, - { 'constant', constant }, - { 'goto', gotokeyword }, - { 'identifier', identifier }, - { 'string', string }, - { 'number', number }, - { 
'longcomment', longcomment }, - { 'shortcomment', shortcomment }, - { 'label', gotolabel }, - { 'operator', operator }, - { 'rest', rest }, + { "whitespace", spacing }, + { "keyword", keyword }, -- can be combined + -- { "structure", structure }, + { "function", builtin }, -- can be combined + { "constant", constant }, -- can be combined + -- { "experimental", experimental }, -- works but better split + { "csname", csname }, + { "goto", gotokeyword }, + { "identifier", identifier }, + { "string", string }, + { "number", number }, + { "longcomment", longcomment }, + { "shortcomment", shortcomment }, + { "label", gotolabel }, + { "operator", operator }, + { "rest", rest }, } -- -- experiment @@ -250,18 +277,18 @@ lualexer._rules = { -- } -- -- lualexer._rules = { --- { 'whitespace', spacing }, --- { 'whatever', whatever }, --- { 'csname', csname }, --- { 'goto', gotokeyword }, --- { 'identifier', identifier }, --- { 'string', string }, --- { 'number', number }, --- { 'longcomment', longcomment }, --- { 'shortcomment', shortcomment }, --- { 'label', gotolabel }, --- { 'operator', operator }, --- { 'rest', rest }, +-- { "whitespace", spacing }, +-- { "whatever", whatever }, +-- { "csname", csname }, +-- { "goto", gotokeyword }, +-- { "identifier", identifier }, +-- { "string", string }, +-- { "number", number }, +-- { "longcomment", longcomment }, +-- { "shortcomment", shortcomment }, +-- { "label", gotolabel }, +-- { "operator", operator }, +-- { "rest", rest }, -- } lualexer._tokenstyles = context.styleset @@ -273,26 +300,26 @@ lualexer._foldpattern = (P("end") + P("if") + P("do") + P("function") + P("repea lualexer._foldsymbols = { _patterns = { - '[a-z][a-z]+', - '[{}%[%]]', + "[a-z][a-z]+", + "[{}%[%]]", }, - ['keyword'] = { -- challenge: if=0 then=1 else=-1 elseif=-1 - ['if'] = 1, -- if .. [then|else] .. end - ['do'] = 1, -- [while] do .. end - ['function'] = 1, -- function .. end - ['repeat'] = 1, -- repeat .. 
until - ['until'] = -1, - ['end'] = -1, + ["keyword"] = { -- challenge: if=0 then=1 else=-1 elseif=-1 + ["if"] = 1, -- if .. [then|else] .. end + ["do"] = 1, -- [while] do .. end + ["function"] = 1, -- function .. end + ["repeat"] = 1, -- repeat .. until + ["until"] = -1, + ["end"] = -1, }, - ['comment'] = { - ['['] = 1, [']'] = -1, + ["comment"] = { + ["["] = 1, ["]"] = -1, }, - -- ['quote'] = { -- confusing - -- ['['] = 1, [']'] = -1, + -- ["quote"] = { -- confusing + -- ["["] = 1, ["]"] = -1, -- }, - ['special'] = { - -- ['('] = 1, [')'] = -1, - ['{'] = 1, ['}'] = -1, + ["special"] = { + -- ["("] = 1, [")"] = -1, + ["{"] = 1, ["}"] = -1, }, } @@ -300,9 +327,9 @@ lualexer._foldsymbols = { local cstoken = R("az","AZ","\127\255") + S("@!?_") local texcsname = P("\\") * cstoken^1 -local commentline = P('%') * (1-S("\n\r"))^0 +local commentline = P("%") * (1-S("\n\r"))^0 -local texcomment = token('comment', Cmt(commentline, function() return directives.cld_inline end)) +local texcomment = token("comment", Cmt(commentline, function() return directives.cld_inline end)) local longthreestart = P("\\!!bs") local longthreestop = P("\\!!es") @@ -312,7 +339,7 @@ local texstring = token("quote", longthreestart) * token("string", longthreestring) * token("quote", longthreestop) --- local texcommand = token("user", texcsname) +----- texcommand = token("user", texcsname) local texcommand = token("warning", texcsname) -- local texstring = token("quote", longthreestart) @@ -325,22 +352,22 @@ local texcommand = token("warning", texcsname) lualexer._directives = directives lualexer._rules_cld = { - { 'whitespace', spacing }, - { 'texstring', texstring }, - { 'texcomment', texcomment }, - { 'texcommand', texcommand }, - -- { 'structure', structure }, - { 'keyword', keyword }, - { 'function', builtin }, - { 'csname', csname }, - { 'constant', constant }, - { 'identifier', identifier }, - { 'string', string }, - { 'longcomment', longcomment }, - { 'shortcomment', shortcomment }, -- 
should not be used inline so best signal it as comment (otherwise complex state till end of inline) - { 'number', number }, - { 'operator', operator }, - { 'rest', rest }, + { "whitespace", spacing }, + { "texstring", texstring }, + { "texcomment", texcomment }, + { "texcommand", texcommand }, + -- { "structure", structure }, + { "keyword", keyword }, + { "function", builtin }, + { "csname", csname }, + { "constant", constant }, + { "identifier", identifier }, + { "string", string }, + { "longcomment", longcomment }, + { "shortcomment", shortcomment }, -- should not be used inline so best signal it as comment (otherwise complex state till end of inline) + { "number", number }, + { "operator", operator }, + { "rest", rest }, } return lualexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-mps.lua b/context/data/scite/context/lexers/scite-context-lexer-mps.lua new file mode 100644 index 000000000..b87ea83cb --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer-mps.lua @@ -0,0 +1,177 @@ +local info = { + version = 1.002, + comment = "scintilla lpeg lexer for metafun", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +local global, string, table, lpeg = _G, string, table, lpeg +local P, R, S, V = lpeg.P, lpeg.R, lpeg.S, lpeg.V +local type = type + +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns + +local token = lexer.token +local exact_match = lexer.exact_match + +local metafunlexer = lexer.new("mps","scite-context-lexer-mps") +local whitespace = metafunlexer.whitespace + +local metapostprimitives = { } +local metapostinternals = { } +local metapostshortcuts = { } +local metapostcommands = { } + +local metafuninternals = { } +local metafunshortcuts = { } +local metafuncommands = { } + +local mergedshortcuts = { } +local mergedinternals = { } + +do + + local definitions = 
context.loaddefinitions("scite-context-data-metapost") + + if definitions then + metapostprimitives = definitions.primitives or { } + metapostinternals = definitions.internals or { } + metapostshortcuts = definitions.shortcuts or { } + metapostcommands = definitions.commands or { } + end + + local definitions = context.loaddefinitions("scite-context-data-metafun") + + if definitions then + metafuninternals = definitions.internals or { } + metafunshortcuts = definitions.shortcuts or { } + metafuncommands = definitions.commands or { } + end + + for i=1,#metapostshortcuts do + mergedshortcuts[#mergedshortcuts+1] = metapostshortcuts[i] + end + for i=1,#metafunshortcuts do + mergedshortcuts[#mergedshortcuts+1] = metafunshortcuts[i] + end + + for i=1,#metapostinternals do + mergedinternals[#mergedinternals+1] = metapostinternals[i] + end + for i=1,#metafuninternals do + mergedinternals[#mergedinternals+1] = metafuninternals[i] + end + +end + +local space = patterns.space -- S(" \n\r\t\f\v") +local any = patterns.any + +local dquote = P('"') +local cstoken = patterns.idtoken +local mptoken = patterns.alpha +local leftbrace = P("{") +local rightbrace = P("}") +local number = patterns.real + +local cstokentex = R("az","AZ","\127\255") + S("@!?_") + +-- we could collapse as in tex + +local spacing = token(whitespace, space^1) +local rest = token("default", any) +local comment = token("comment", P("%") * (1-S("\n\r"))^0) +local internal = token("reserved", exact_match(mergedshortcuts,false)) +local shortcut = token("data", exact_match(mergedinternals)) +local helper = token("command", exact_match(metafuncommands)) +local plain = token("plain", exact_match(metapostcommands)) +local quoted = token("quote", dquote) + * token("string", P(1-dquote)^0) + * token("quote", dquote) +local texstuff = token("quote", P("btex ") + P("verbatimtex ")) + * token("string", P(1-P(" etex"))^0) + * token("quote", P(" etex")) +local primitive = token("primitive", exact_match(metapostprimitives)) 
+local identifier = token("default", cstoken^1) +local number = token("number", number) +local grouping = token("grouping", S("()[]{}")) -- can be an option +local special = token("special", S("#()[]{}<>=:\"")) -- or else := <> etc split +local texlike = token("warning", P("\\") * cstokentex^1) +local extra = token("extra", P("+-+") + P("++") + S("`~%^&_-+*/\'|\\")) + +local nested = P { leftbrace * (V(1) + (1-rightbrace))^0 * rightbrace } +local texlike = token("embedded", P("\\") * (P("MP") + P("mp")) * mptoken^1) + * spacing^0 + * token("grouping", leftbrace) + * token("default", (nested + (1-rightbrace))^0 ) + * token("grouping", rightbrace) + + token("warning", P("\\") * cstokentex^1) + +-- lua: we assume: lua ( "lua code" ) + +local cldlexer = lexer.load("scite-context-lexer-cld","mps-cld") + +local startlua = P("lua") * space^0 * P('(') * space^0 * P('"') +local stoplua = P('"') * space^0 * P(')') + +local startluacode = token("embedded", startlua) +local stopluacode = #stoplua * token("embedded", stoplua) + +lexer.embed_lexer(metafunlexer, cldlexer, startluacode, stopluacode) + +metafunlexer._rules = { + { "whitespace", spacing }, + { "comment", comment }, + { "internal", internal }, + { "shortcut", shortcut }, + { "helper", helper }, + { "plain", plain }, + { "primitive", primitive }, + { "texstuff", texstuff }, + { "identifier", identifier }, + { "number", number }, + { "quoted", quoted }, + -- { "grouping", grouping }, -- can be an option + { "special", special }, + { "texlike", texlike }, + { "extra", extra }, + { "rest", rest }, +} + +metafunlexer._tokenstyles = context.styleset + +metafunlexer._foldpattern = patterns.lower^2 -- separate entry else interference + +metafunlexer._foldsymbols = { + _patterns = { + "[a-z][a-z]+", + }, + ["plain"] = { + ["beginfig"] = 1, + ["endfig"] = -1, + ["beginglyph"] = 1, + ["endglyph"] = -1, + -- ["begingraph"] = 1, + -- ["endgraph"] = -1, + }, + ["primitive"] = { + ["def"] = 1, + ["vardef"] = 1, + ["primarydef"] = 
1, + ["secondarydef" ] = 1, + ["tertiarydef"] = 1, + ["enddef"] = -1, + ["if"] = 1, + ["fi"] = -1, + ["for"] = 1, + ["forever"] = 1, + ["endfor"] = -1, + } +} + +-- if inspect then inspect(metafunlexer) end + +return metafunlexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-pdf-object.lua b/context/data/scite/context/lexers/scite-context-lexer-pdf-object.lua new file mode 100644 index 000000000..1fb95838a --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer-pdf-object.lua @@ -0,0 +1,136 @@ +local info = { + version = 1.002, + comment = "scintilla lpeg lexer for pdf objects", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +-- no longer used: nesting lexers with whitespace in start/stop is unreliable + +local P, R, S, C, V = lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.V + +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns + +local token = lexer.token + +local pdfobjectlexer = lexer.new("pdfobj","scite-context-lexer-pdf-object") +local whitespace = pdfobjectlexer.whitespace + +local space = patterns.space +local spacing = patterns.spacing +local nospacing = patterns.nospacing +local anything = patterns.anything +local newline = patterns.eol +local real = patterns.real +local cardinal = patterns.cardinal + +local lparent = P("(") +local rparent = P(")") +local langle = P("<") +local rangle = P(">") +local escape = P("\\") +local unicodetrigger = P("feff") + +local nametoken = 1 - space - S("<>/[]()") +local name = P("/") * nametoken^1 + +local p_string = P { ( escape * anything + lparent * V(1) * rparent + (1 - rparent) )^0 } + +local t_spacing = token(whitespace, spacing) +local t_spaces = token(whitespace, spacing)^0 +local t_rest = token("default", nospacing) -- anything + +local p_stream = P("stream") +local p_endstream = P("endstream") +local p_obj = P("obj") +local p_endobj 
= P("endobj") +local p_reference = P("R") + +local p_objectnumber = patterns.cardinal +local p_comment = P("%") * (1-S("\n\r"))^0 + +local t_string = token("quote", lparent) + * token("string", p_string) + * token("quote", rparent) +local t_unicode = token("quote", langle) + * token("plain", unicodetrigger) + * token("string", (1-rangle)^1) + * token("quote", rangle) +local t_whatsit = token("quote", langle) + * token("string", (1-rangle)^1) + * token("quote", rangle) +local t_keyword = token("command", name) +local t_constant = token("constant", name) +local t_number = token("number", real) +-- t_reference = token("number", cardinal) +-- * t_spacing +-- * token("number", cardinal) +local t_reserved = token("number", P("true") + P("false") + P("NULL")) +local t_reference = token("warning", cardinal) + * t_spacing + * token("warning", cardinal) + * t_spacing + * token("keyword", p_reference) + +local t_comment = token("comment", p_comment) + +local t_openobject = token("warning", p_objectnumber * spacing) +-- * t_spacing + * token("warning", p_objectnumber * spacing) +-- * t_spacing + * token("keyword", p_obj) +local t_closeobject = token("keyword", p_endobj) + +local t_opendictionary = token("grouping", P("<<")) +local t_closedictionary = token("grouping", P(">>")) + +local t_openarray = token("grouping", P("[")) +local t_closearray = token("grouping", P("]")) + +-- todo: comment + +local t_stream = token("keyword", p_stream) +-- * token("default", newline * (1-newline*p_endstream*newline)^1 * newline) +-- * token("text", (1 - p_endstream)^1) + * (token("text", (1 - p_endstream-spacing)^1) + t_spacing)^1 + * token("keyword", p_endstream) + +local t_dictionary = { "dictionary", + dictionary = t_opendictionary * (t_spaces * t_keyword * t_spaces * V("whatever"))^0 * t_spaces * t_closedictionary, + array = t_openarray * (t_spaces * V("whatever"))^0 * t_spaces * t_closearray, + whatever = V("dictionary") + V("array") + t_constant + t_reference + t_string + t_unicode + 
t_number + t_reserved + t_whatsit, + } + +----- t_object = { "object", -- weird that we need to catch the end here (probably otherwise an invalid lpeg) +----- object = t_spaces * (V("dictionary") * t_spaces * t_stream^-1 + V("array") + V("number") + t_spaces) * t_spaces * t_closeobject, +----- dictionary = t_opendictionary * (t_spaces * t_keyword * t_spaces * V("whatever"))^0 * t_spaces * t_closedictionary, +----- array = t_openarray * (t_spaces * V("whatever"))^0 * t_spaces * t_closearray, +----- whatever = V("dictionary") + V("array") + t_constant + t_reference + t_string + t_unicode + t_number + t_reserved + t_whatsit, +----- number = t_number, +----- } + +local t_object = { "object", -- weird that we need to catch the end here (probably otherwise an invalid lpeg) + dictionary = t_dictionary.dictionary, + array = t_dictionary.array, + whatever = t_dictionary.whatever, + object = t_openobject^-1 * t_spaces * (V("dictionary") * t_spaces * t_stream^-1 + V("array") + V("number") + t_spaces) * t_spaces * t_closeobject, + number = t_number, + } + +pdfobjectlexer._shared = { + dictionary = t_dictionary, + object = t_object, + stream = t_stream, +} + +pdfobjectlexer._rules = { + { "whitespace", t_spacing }, -- in fact, here we don't want whitespace as it's top level lexer work + { "object", t_object }, +} + +pdfobjectlexer._tokenstyles = context.styleset + +return pdfobjectlexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-pdf-xref.lua b/context/data/scite/context/lexers/scite-context-lexer-pdf-xref.lua new file mode 100644 index 000000000..7097c41a6 --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer-pdf-xref.lua @@ -0,0 +1,43 @@ +local info = { + version = 1.002, + comment = "scintilla lpeg lexer for pdf xref", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +-- no longer used: nesting lexers with whitespace in 
start/stop is unreliable + +local P, R = lpeg.P, lpeg.R + +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns + +local token = lexer.token + +local pdfxreflexer = lexer.new("pdfxref","scite-context-lexer-pdf-xref") +local whitespace = pdfxreflexer.whitespace + +local spacing = patterns.spacing +local cardinal = patterns.cardinal +local alpha = patterns.alpha + +local t_spacing = token(whitespace, spacing) + +local p_xref = P("xref") +local t_xref = token("keyword",p_xref) + * token("number", cardinal * spacing * cardinal * spacing) + +local t_number = token("number", cardinal * spacing * cardinal * spacing) + * token("keyword", alpha) + +pdfxreflexer._rules = { + { "whitespace", t_spacing }, + { "xref", t_xref }, + { "number", t_number }, +} + +pdfxreflexer._tokenstyles = context.styleset + +return pdfxreflexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-pdf.lua b/context/data/scite/context/lexers/scite-context-lexer-pdf.lua new file mode 100644 index 000000000..f8e4e7380 --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer-pdf.lua @@ -0,0 +1,204 @@ +local info = { + version = 1.002, + comment = "scintilla lpeg lexer for pdf", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +-- pdf is normally static .. i.e. not edited so we don't really +-- need embedded lexers.
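The balanced-string grammar that recurs in these pdf lexers (`p_string`) deserves a standalone illustration. The following sketch assumes only a stock lpeg installation; the names `lparent`, `rparent`, and `escape` mirror the lexer source, and `P(1)` stands in for the lexer's `anything` pattern:

```lua
-- Sketch of the recursive grammar used for pdf literal strings. Inside
-- P { ... }, V(1) refers back to the grammar itself, so nested balanced
-- parentheses are consumed recursively, and escape eats any escaped byte.
local lpeg = require("lpeg")
local P, V, C = lpeg.P, lpeg.V, lpeg.C

local lparent = P("(")
local rparent = P(")")
local escape  = P("\\")

local p_string = P { ( escape * P(1) + lparent * V(1) * rparent + (1 - rparent) )^0 }

-- captures a whole pdf string, including the nested pair and the escaped ")"
print(lpeg.match(C(lparent * p_string * rparent), "(outer (inner) \\) still inside)"))
```

Without the `lparent * V(1) * rparent` alternative the match would stop at the first `)`, which is why the grammar form is needed rather than a flat pattern.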
+ +local P, R, S, V = lpeg.P, lpeg.R, lpeg.S, lpeg.V + +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns + +local token = lexer.token + +local pdflexer = lexer.new("pdf","scite-context-lexer-pdf") +local whitespace = pdflexer.whitespace + +----- pdfobjectlexer = lexer.load("scite-context-lexer-pdf-object") +----- pdfxreflexer = lexer.load("scite-context-lexer-pdf-xref") + +local anything = patterns.anything +local space = patterns.space +local spacing = patterns.spacing +local nospacing = patterns.nospacing +local anything = patterns.anything +local restofline = patterns.restofline + +local t_whitespace = token(whitespace, spacing) +local t_spacing = token("default", spacing) +----- t_rest = token("default", nospacing) +local t_rest = token("default", anything) + +local p_comment = P("%") * restofline +local t_comment = token("comment", p_comment) + +-- whatever + +local space = patterns.space +local spacing = patterns.spacing +local nospacing = patterns.nospacing +local anything = patterns.anything +local newline = patterns.eol +local real = patterns.real +local cardinal = patterns.cardinal +local alpha = patterns.alpha + +local lparent = P("(") +local rparent = P(")") +local langle = P("<") +local rangle = P(">") +local escape = P("\\") +local unicodetrigger = P("feff") + +local nametoken = 1 - space - S("<>/[]()") +local name = P("/") * nametoken^1 + +local p_string = P { ( escape * anything + lparent * V(1) * rparent + (1 - rparent) )^0 } + +local t_spacing = token("default", spacing) +local t_spaces = token("default", spacing)^0 +local t_rest = token("default", nospacing) -- anything + +local p_stream = P("stream") +local p_endstream = P("endstream") +local p_obj = P("obj") +local p_endobj = P("endobj") +local p_reference = P("R") + +local p_objectnumber = patterns.cardinal +local p_comment = P("%") * (1-S("\n\r"))^0 + +local t_string = token("quote", lparent) + * token("string", p_string) + * token("quote", 
rparent) +local t_unicode = token("quote", langle) + * token("plain", unicodetrigger) + * token("string", (1-rangle)^1) + * token("quote", rangle) +local t_whatsit = token("quote", langle) + * token("string", (1-rangle)^1) + * token("quote", rangle) +local t_keyword = token("command", name) +local t_constant = token("constant", name) +local t_number = token("number", real) +-- t_reference = token("number", cardinal) +-- * t_spacing +-- * token("number", cardinal) +local t_reserved = token("number", P("true") + P("false") + P("NULL")) +-- t_reference = token("warning", cardinal * spacing * cardinal * spacing) +-- * token("keyword", p_reference) +local t_reference = token("warning", cardinal) + * t_spacing + * token("warning", cardinal) + * t_spacing + * token("keyword", p_reference) + +local t_comment = token("comment", p_comment) + +local t_openobject = token("warning", p_objectnumber) + * t_spacing + * token("warning", p_objectnumber) + * t_spacing + * token("keyword", p_obj) +-- t_openobject = token("warning", p_objectnumber * spacing) +-- * token("warning", p_objectnumber * spacing) +-- * token("keyword", p_obj) +local t_closeobject = token("keyword", p_endobj) + +local t_opendictionary = token("grouping", P("<<")) +local t_closedictionary = token("grouping", P(">>")) + +local t_openarray = token("grouping", P("[")) +local t_closearray = token("grouping", P("]")) + +local t_stream = token("keyword", p_stream) + * token("text", (1 - p_endstream)^1) + * token("keyword", p_endstream) + +local t_dictionary = { "dictionary", + dictionary = t_opendictionary * (t_spaces * t_keyword * t_spaces * V("whatever"))^0 * t_spaces * t_closedictionary, + array = t_openarray * (t_spaces * V("whatever"))^0 * t_spaces * t_closearray, + whatever = V("dictionary") + V("array") + t_constant + t_reference + t_string + t_unicode + t_number + t_reserved + t_whatsit, + } + +local t_object = { "object", -- weird that we need to catch the end here (probably otherwise an invalid lpeg) + 
dictionary = t_dictionary.dictionary, + array = t_dictionary.array, + whatever = t_dictionary.whatever, + object = t_openobject * t_spaces * (V("dictionary")^-1 * t_spaces * t_stream^-1 + V("array") + V("number") + t_spaces) * t_spaces * t_closeobject, + number = t_number, + } + +-- objects ... sometimes NUL characters play havoc ... and in xref we have +-- issues with embedded lexers that have spaces in the start and stop +-- conditions and this cannot be handled well either ... so, an imperfect +-- solution ... but anyway, there is not that much that can end up in +-- the root of the tree so we're sort of safe + +local p_trailer = P("trailer") +local t_trailer = token("keyword", p_trailer) + * t_spacing + * t_dictionary +-- t_trailer = token("keyword", p_trailer * spacing) + * t_dictionary + +local p_startxref = P("startxref") +local t_startxref = token("keyword", p_startxref) + * t_spacing + * token("number", cardinal) +-- t_startxref = token("keyword", p_startxref * spacing) +-- * token("number", cardinal) + +local p_xref = P("xref") +local t_xref = token("keyword",p_xref) + * t_spacing + * token("number", cardinal) + * t_spacing + * token("number", cardinal) + * spacing +-- t_xref = token("keyword",p_xref) +-- * token("number", spacing * cardinal * spacing * cardinal * spacing) + +local t_number = token("number", cardinal) + * t_spacing + * token("number", cardinal) + * t_spacing + * token("keyword", S("fn")) +-- t_number = token("number", cardinal * spacing * cardinal * spacing) +-- * token("keyword", S("fn")) + +pdflexer._rules = { + { "whitespace", t_whitespace }, + { "object", t_object }, + { "comment", t_comment }, + { "trailer", t_trailer }, + { "startxref", t_startxref }, + { "xref", t_xref }, + { "number", t_number }, + { "rest", t_rest }, +} + +pdflexer._tokenstyles = context.styleset + +-- lexer.inspect(pdflexer) + +-- collapser: obj endobj stream endstream + +pdflexer._foldpattern = p_obj + p_endobj + p_stream + p_endstream + 
+pdflexer._foldsymbols = { + ["keyword"] = { + ["obj"] = 1, + ["endobj"] = -1, + ["stream"] = 1, + ["endstream"] = -1, + }, +} + +return pdflexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-tex-web.lua b/context/data/scite/context/lexers/scite-context-lexer-tex-web.lua new file mode 100644 index 000000000..5d8859c26 --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer-tex-web.lua @@ -0,0 +1,23 @@ +local info = { + version = 1.002, + comment = "scintilla lpeg lexer for tex web", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns + +local texweblexer = lexer.new("tex-web","scite-context-lexer-tex") +local texlexer = lexer.load("scite-context-lexer-tex") + +-- can probably be done nicer now, a bit of a hack + +texweblexer._rules = texlexer._rules_web +texweblexer._tokenstyles = texlexer._tokenstyles +texweblexer._foldsymbols = texlexer._foldsymbols +texweblexer._directives = texlexer._directives + +return texweblexer diff --git a/context/data/scite/lexers/scite-context-lexer-tex.lua b/context/data/scite/context/lexers/scite-context-lexer-tex.lua index a509fadab..d67be2cd8 100644 --- a/context/data/scite/lexers/scite-context-lexer-tex.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-tex.lua @@ -24,33 +24,26 @@ local info = { -- local interface = props["keywordclass.macros.context.en"] -- local interface = lexer.get_property("keywordclass.macros.context.en","") - -- it seems that whitespace triggers the lexer when embedding happens, but this - -- is quite fragile due to duplicate styles .. lexer.WHITESPACE is a number - -- (initially) ... 
_NAME vs filename (but we don't want to overwrite files) - - -- this lexer does not care about other macro packages (one can of course add a fake - -- interface but it's not on the agenda) - ]]-- -if not lexer._CONTEXTEXTENSIONS then require("scite-context-lexer") end - -local lexer = lexer local global, string, table, lpeg = _G, string, table, lpeg -local token, exact_match = lexer.token, lexer.exact_match local P, R, S, V, C, Cmt, Cp, Cc, Ct = lpeg.P, lpeg.R, lpeg.S, lpeg.V, lpeg.C, lpeg.Cmt, lpeg.Cp, lpeg.Cc, lpeg.Ct local type, next = type, next local find, match, lower, upper = string.find, string.match, string.lower, string.upper --- module(...) - -local contextlexer = { _NAME = "tex", _FILENAME = "scite-context-lexer-tex" } -local whitespace = lexer.WHITESPACE +local lexer = require("lexer") local context = lexer.context +local patterns = context.patterns +local inform = context.inform -local cldlexer = lexer.load('scite-context-lexer-cld') ------ cldlexer = lexer.load('scite-context-lexer-lua') -local mpslexer = lexer.load('scite-context-lexer-mps') +local token = lexer.token +local exact_match = lexer.exact_match + +local contextlexer = lexer.new("tex","scite-context-lexer-tex") +local whitespace = contextlexer.whitespace + +local cldlexer = lexer.load("scite-context-lexer-cld") +local mpslexer = lexer.load("scite-context-lexer-mps") local commands = { en = { } } local primitives = { } @@ -64,7 +57,9 @@ do -- todo: only once, store in global local definitions = context.loaddefinitions("scite-context-data-interfaces") if definitions then + local interfaces = { } for interface, list in next, definitions do + interfaces[#interfaces+1] = interface local c = { } for i=1,#list do c[list[i]] = true @@ -79,6 +74,7 @@ end commands[interface] = c end + inform("context user interfaces '%s' supported",table.concat(interfaces," ")) end local definitions = context.loaddefinitions("scite-context-data-context") @@ -146,13 +142,16 @@ local validminimum = 3
-- % language=uk -local knownpreamble = Cmt(#P("% "), function(input,i,_) -- todo : utfbomb +-- fails (empty loop message) ... latest lpeg issue? + +local knownpreamble = Cmt(P("% "), function(input,i,_) -- todo : utfbomb, was #P("% ") if i < 10 then validwords, validminimum = false, 3 - local s, e, word = find(input,'^(.+)[\n\r]',i) -- combine with match + local s, e, word = find(input,"^(.+)[\n\r]",i) -- combine with match if word then local interface = match(word,"interface=([a-z]+)") - if interface then + if interface and #interface == 2 then + inform("enabling context user interface '%s'",interface) currentcommands = commands[interface] or commands.en or { } end local language = match(word,"language=([a-z]+)") @@ -170,7 +169,7 @@ end) -- local helpers_hash = { } for i=1,#helpers do helpers_hash [helpers [i]] = true end -- local primitives_hash = { } for i=1,#primitives do primitives_hash[primitives[i]] = true end --- local specialword = Ct( P('\\') * Cmt( C(cstoken^1), function(input,i,s) +-- local specialword = Ct( P("\\") * Cmt( C(cstoken^1), function(input,i,s) -- if currentcommands[s] then -- return true, "command", i -- elseif constants_hash[s] then @@ -184,7 +183,7 @@ end) -- end -- end) ) --- local specialword = P('\\') * Cmt( C(cstoken^1), function(input,i,s) +-- local specialword = P("\\") * Cmt( C(cstoken^1), function(input,i,s) -- if currentcommands[s] then -- return true, { "command", i } -- elseif constants_hash[s] then @@ -202,11 +201,11 @@ end) -- 10pt -local commentline = P('%') * (1-S("\n\r"))^0 +local commentline = P("%") * (1-S("\n\r"))^0 local endline = S("\n\r")^1 -local space = lexer.space -- S(" \n\r\t\f\v") -local any = lexer.any +local space = patterns.space -- S(" \n\r\t\f\v") +local any = patterns.any local backslash = P("\\") local hspace = S(" \t") @@ -219,7 +218,7 @@ local p_command = backslash * knowncommand local p_constant = backslash * exact_match(constants) local p_helper = backslash * exact_match(helpers) local p_primitive = 
backslash * exact_match(primitives) -local p_ifprimitive = P('\\if') * cstoken^1 +local p_ifprimitive = P("\\if") * cstoken^1 local p_csname = backslash * (cstoken^1 + P(1)) local p_grouping = S("{$}") local p_special = S("#()[]<>=\"") @@ -299,24 +298,24 @@ local p_invisible = invisibles^1 local spacing = token(whitespace, p_spacing ) -local rest = token('default', p_rest ) -local preamble = token('preamble', p_preamble ) -local comment = token('comment', p_comment ) -local command = token('command', p_command ) -local constant = token('data', p_constant ) -local helper = token('plain', p_helper ) -local primitive = token('primitive', p_primitive ) -local ifprimitive = token('primitive', p_ifprimitive) -local reserved = token('reserved', p_reserved ) -local csname = token('user', p_csname ) -local grouping = token('grouping', p_grouping ) -local number = token('number', p_number ) - * token('constant', p_unit ) -local special = token('special', p_special ) -local reserved = token('reserved', p_reserved ) -- reserved internal preproc -local extra = token('extra', p_extra ) -local invisible = token('invisible', p_invisible ) -local text = token('default', p_text ) +local rest = token("default", p_rest ) +local preamble = token("preamble", p_preamble ) +local comment = token("comment", p_comment ) +local command = token("command", p_command ) +local constant = token("data", p_constant ) +local helper = token("plain", p_helper ) +local primitive = token("primitive", p_primitive ) +local ifprimitive = token("primitive", p_ifprimitive) +local reserved = token("reserved", p_reserved ) +local csname = token("user", p_csname ) +local grouping = token("grouping", p_grouping ) +local number = token("number", p_number ) + * token("constant", p_unit ) +local special = token("special", p_special ) +local reserved = token("reserved", p_reserved ) -- reserved internal preproc +local extra = token("extra", p_extra ) +local invisible = token("invisible", p_invisible ) +local text = 
token("default", p_text ) local word = p_word ----- startluacode = token("grouping", P("\\startluacode")) @@ -390,10 +389,11 @@ contextlexer._reset_parser = function() end local luaenvironment = P("lua") * (P("setups") + P("code") + P(true)) + + P("ctxfunction") * (P("definition") + P(true)) local inlinelua = P("\\") * ( - P("ctx") * ( P("lua") + P("command") + P("late") * (P("lua") + P("command")) ) - + P("cld") * ( P("command") + P("context") ) + P("ctx") * (P("lua") + P("command") + P("late") * (P("lua") + P("command")) + P("function")) + + P("cld") * (P("command") + P("context")) + P("luaexpr") + (P("direct") + P("late")) * P("lua") ) @@ -434,9 +434,6 @@ local callers = token("embedded", P("\\") * metafuncall) * metafu lexer.embed_lexer(contextlexer, cldlexer, startluacode, stopluacode) lexer.embed_lexer(contextlexer, mpslexer, startmetafuncode, stopmetafuncode) --- Watch the text grabber, after all, we're talking mostly of text (beware, --- no punctuation here as it can be special. We might go for utf here. - contextlexer._rules = { { "whitespace", spacing }, { "preamble", preamble }, @@ -460,11 +457,61 @@ contextlexer._rules = { { "rest", rest }, } -contextlexer._tokenstyles = context.styleset --- contextlexer._tokenstyles = context.stylesetcopy() -- experiment +-- Watch the text grabber, after all, we're talking mostly of text (beware, +-- no punctuation here as it can be special). We might go for utf here. 
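The `_rules` tables assigned throughout these lexers are ordered, first-match-wins lists: at each input position the scanner tries the rules from top to bottom and takes the first token pattern that matches. A hypothetical, much simplified dispatcher (plain Lua string patterns standing in for the real lpeg token patterns) behaves like this:

```lua
-- First-match-wins rule dispatch, the idea behind the ordered _rules tables.
-- This is an illustrative toy, not the scintillua implementation.
local rules = {
  { "whitespace", "^%s+"        },
  { "comment",    "^%%[^\n]*"   },
  { "command",    "^\\%a+"      },
  { "text",       "^[^\\%%%s]+" },
}

local function tokenize(input)
  local pos, tokens = 1, { }
  while pos <= #input do
    local matched = false
    for _, rule in ipairs(rules) do
      local s, e = string.find(input, rule[2], pos)
      if s == pos then
        tokens[#tokens+1] = { rule[1], string.sub(input, s, e) }
        pos, matched = e + 1, true
        break
      end
    end
    if not matched then -- fallback, like the "rest" rule: skip one byte
      tokens[#tokens+1] = { "default", string.sub(input, pos, pos) }
      pos = pos + 1
    end
  end
  return tokens
end

for _, t in ipairs(tokenize("\\starttext hello % comment")) do
  print(t[1], t[2])
end
```

This is why rule order matters in the tables below: a broad rule such as `text` placed too early would shadow more specific ones like `command` or `comment`.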
+ +local web = lexer.loadluafile("scite-context-lexer-web-snippets") + +if web then + + lexer.inform("supporting web snippets in tex lexer") + + contextlexer._rules_web = { + { "whitespace", spacing }, + { "text", text }, -- non words + { "comment", comment }, + { "constant", constant }, + { "callers", callers }, + { "helper", helper }, + { "command", command }, + { "primitive", primitive }, + { "ifprimitive", ifprimitive }, + { "reserved", reserved }, + { "csname", csname }, + { "grouping", grouping }, + { "special", special }, + { "extra", extra }, + { "invisible", invisible }, + { "web", web.pattern }, + { "rest", rest }, + } + +else + + lexer.report("not supporting web snippets in tex lexer") + + contextlexer._rules_web = { + { "whitespace", spacing }, + { "text", text }, -- non words + { "comment", comment }, + { "constant", constant }, + { "callers", callers }, + { "helper", helper }, + { "command", command }, + { "primitive", primitive }, + { "ifprimitive", ifprimitive }, + { "reserved", reserved }, + { "csname", csname }, + { "grouping", grouping }, + { "special", special }, + { "extra", extra }, + { "invisible", invisible }, + { "rest", rest }, + } --- contextlexer._tokenstyles[#contextlexer._tokenstyles + 1] = { cldlexer._NAME..'_whitespace', lexer.style_whitespace } --- contextlexer._tokenstyles[#contextlexer._tokenstyles + 1] = { mpslexer._NAME..'_whitespace', lexer.style_whitespace } +end + +contextlexer._tokenstyles = context.styleset local environment = { ["\\start"] = 1, ["\\stop"] = -1, @@ -495,4 +542,6 @@ contextlexer._foldsymbols = { -- these need to be style references ["grouping"] = group, } +-- context.inspect(contextlexer) + return contextlexer diff --git a/context/data/scite/lexers/scite-context-lexer-txt.lua b/context/data/scite/context/lexers/scite-context-lexer-txt.lua index fe062fb94..43eec2c35 100644 --- a/context/data/scite/lexers/scite-context-lexer-txt.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-txt.lua @@ -6,22 
+6,23 @@ local info = { license = "see context related readme files", } -if not lexer._CONTEXTEXTENSIONS then require("scite-context-lexer") end - -local lexer = lexer -local token = lexer.token -local P, S, Cmt, Cp, Ct = lpeg.P, lpeg.S, lpeg.Cmt, lpeg.Cp, lpeg.Ct +local P, S, Cmt, Cp = lpeg.P, lpeg.S, lpeg.Cmt, lpeg.Cp local find, match = string.find, string.match -local textlexer = { _NAME = "txt", _FILENAME = "scite-context-lexer-txt" } -local whitespace = lexer.WHITESPACE +local lexer = require("lexer") local context = lexer.context +local patterns = context.patterns + +local token = lexer.token + +local textlexer = lexer.new("txt","scite-context-lexer-txt") +local whitespace = textlexer.whitespace -local space = lexer.space -local any = lexer.any +local space = patterns.space +local any = patterns.any +local wordtoken = patterns.wordtoken +local wordpattern = patterns.wordpattern -local wordtoken = context.patterns.wordtoken -local wordpattern = context.patterns.wordpattern local checkedword = context.checkedword local styleofword = context.styleofword local setwordlist = context.setwordlist @@ -36,10 +37,10 @@ local validminimum = 3 -- [#!-%] language=uk -local p_preamble = Cmt(#(S("#!-%") * P(" ")), function(input,i,_) -- todo: utf bomb +local p_preamble = Cmt((S("#!-%") * P(" ")), function(input,i,_) -- todo: utf bomb no longer # if i == 1 then -- < 10 then validwords, validminimum = false, 3 - local s, e, line = find(input,'^[#!%-%%](.+)[\n\r]',i) + local s, e, line = find(input,"^[#!%-%%](.+)[\n\r]",i) if line then local language = match(line,"language=([a-z]+)") if language then @@ -54,7 +55,6 @@ local t_preamble = token("preamble", p_preamble) local t_word = --- Ct( wordpattern / function(s) return styleofword(validwords,validminimum,s) end * Cp() ) -- the function can be inlined wordpattern / function(s) return styleofword(validwords,validminimum,s) end * Cp() -- the function can be inlined local t_text = diff --git 
a/context/data/scite/context/lexers/scite-context-lexer-web-snippets.lua b/context/data/scite/context/lexers/scite-context-lexer-web-snippets.lua new file mode 100644 index 000000000..196a545bc --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer-web-snippets.lua @@ -0,0 +1,133 @@ +local info = { + version = 1.002, + comment = "scintilla lpeg lexer for web snippets", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +local P, R, S, C, Cg, Cb, Cs, Cmt, lpegmatch = lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.Cg, lpeg.Cb, lpeg.Cs, lpeg.Cmt, lpeg.match + +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns + +local token = lexer.token + +local websnippets = { } + +local space = patterns.space -- S(" \n\r\t\f\v") +local any = patterns.any +local restofline = patterns.restofline +local startofline = patterns.startofline + +local squote = P("'") +local dquote = P('"') +local period = P(".") + +local t_whitespace = token(whitespace, space^1) +local t_spacing = token("default", space^1) +local t_rest = token("default", any) + +-- the web subset + +local p_beginofweb = P("@") +local p_endofweb = P("@>") + +-- @, @/ @| @# @+ @; @[ @] + +local p_directive_1 = p_beginofweb * S(",/|#+;[]") +local t_directive_1 = token("label",p_directive_1) + +-- @.text @>(monospaced) +-- @:text @>(macro driven) +-- @= verbose@> +-- @! 
underlined @> +-- @t text @> (hbox) +-- @q ignored @> + +local p_typeset = p_beginofweb * S(".:=!tq") +local t_typeset = token("label",p_typeset) * token("warning",(1-p_endofweb)^1) * token("label",p_endofweb) + +-- @^index@> + +local p_index = p_beginofweb * P("^") +local t_index = token("label",p_index) * token("function",(1-p_endofweb)^1) * token("label",p_endofweb) + +-- @f text renderclass + +local p_render = p_beginofweb * S("f") +local t_render = token("label",p_render) * t_spacing * token("warning",(1-space)^1) * t_spacing * token("label",(1-space)^1) + +-- @s idem +-- @p idem +-- @& strip (spaces before) +-- @h + +local p_directive_2 = p_beginofweb * S("sp&h") +local t_directive_2 = token("label",p_directive_2) + +-- @< ... @> [=|+=|] +-- @(foo@> + +local p_reference = p_beginofweb * S("<(") +local t_reference = token("label",p_reference) * token("function",(1-p_endofweb)^1) * token("label",p_endofweb * (P("+=") + P("="))^-1) + +-- @'char' (ascii code) + +local p_character = p_beginofweb * S("'") +local t_character = token("label",p_character) * token("reserved",(1-squote)^1) * token("label",squote) + +-- @l nonascii + +local p_nonascii = p_beginofweb * S("l") +local t_nonascii = token("label",p_nonascii) * t_spacing * token("reserved",(1-space)^1) + +-- @x @y @z changefile +-- @i webfile + +local p_filename = p_beginofweb * S("xyzi") +local t_filename = token("label",p_filename) * t_spacing * token("reserved",(1-space)^1) + +-- @@ escape + +local p_escape = p_beginofweb * p_beginofweb +local t_escape = token("text",p_escape) + +-- structure + +-- @* title. 
+ +-- local p_section = p_beginofweb * P("*")^1 +-- local t_section = token("label",p_section) * t_spacing * token("function",(1-period)^1) * token("label",period) + +-- @ explanation + +-- local p_explanation = p_beginofweb +-- local t_explanation = token("label",p_explanation) * t_spacing^1 + +-- @d macro + +-- local p_macro = p_beginofweb * P("d") +-- local t_macro = token("label",p_macro) + +-- @c code + +-- local p_code = p_beginofweb * P("c") +-- local t_code = token("label",p_code) + +websnippets.pattern = P ( + t_typeset + + t_index + + t_render + + t_reference + + t_filename + + t_directive_1 + + t_directive_2 + + t_character + + t_nonascii + + t_escape +) + + +return websnippets diff --git a/context/data/scite/context/lexers/scite-context-lexer-web.lua b/context/data/scite/context/lexers/scite-context-lexer-web.lua new file mode 100644 index 000000000..86ae76644 --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer-web.lua @@ -0,0 +1,67 @@ +local info = { + version = 1.003, + comment = "scintilla lpeg lexer for web", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +local P, R, S = lpeg.P, lpeg.R, lpeg.S + +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns + +local token = lexer.token +local exact_match = lexer.exact_match + +local weblexer = lexer.new("web","scite-context-lexer-web") +local whitespace = weblexer.whitespace + +local space = patterns.space -- S(" \n\r\t\f\v") +local any = patterns.any +local restofline = patterns.restofline +local startofline = patterns.startofline + +local period = P(".") +local percent = P("%") + +local spacing = token(whitespace, space^1) +local rest = token("default", any) + +local eop = P("@>") +local eos = eop * P("+")^-1 * P("=") + +-- we can put some of the next in the web-snippets file +-- is f okay here? 
+ +local texcomment = token("comment", percent * restofline^0) + +local texpart = token("label",P("@")) * #spacing + + token("label",P("@") * P("*")^1) * token("function",(1-period)^1) * token("label",period) +local midpart = token("label",P("@d")) * #spacing + + token("label",P("@f")) * #spacing +local cpppart = token("label",P("@c")) * #spacing + + token("label",P("@p")) * #spacing + + token("label",P("@") * S("<(")) * token("function",(1-eop)^1) * token("label",eos) + +local anypart = P("@") * ( P("*")^1 + S("dfcp") + space^1 + S("<(") * (1-eop)^1 * eos ) +local limbo = 1 - anypart - percent + +local texlexer = lexer.load("scite-context-lexer-tex-web") +local cpplexer = lexer.load("scite-context-lexer-cpp-web") + +lexer.embed_lexer(weblexer, texlexer, texpart + limbo, #anypart) +lexer.embed_lexer(weblexer, cpplexer, cpppart + midpart, #anypart) + +local texcomment = token("comment", percent * restofline^0) + +weblexer._rules = { + { "whitespace", spacing }, + { "texcomment", texcomment }, -- else issues with first tex section + { "rest", rest }, +} + +weblexer._tokenstyles = context.styleset + +return weblexer diff --git a/context/data/scite/lexers/scite-context-lexer-xml-cdata.lua b/context/data/scite/context/lexers/scite-context-lexer-xml-cdata.lua index 97253e140..e6276da0d 100644 --- a/context/data/scite/lexers/scite-context-lexer-xml-cdata.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-xml-cdata.lua @@ -6,23 +6,26 @@ local info = { license = "see context related readme files", } -local lexer = lexer -local token = lexer.token local P = lpeg.P -local xmlcdatalexer = { _NAME = "xml-cdata", _FILENAME = "scite-context-lexer-xml-cdata" } -local whitespace = lexer.WHITESPACE -- triggers states +local lexer = require("lexer") local context = lexer.context +local patterns = context.patterns -local space = lexer.space +local token = lexer.token + +local xmlcdatalexer = lexer.new("xml-cdata","scite-context-lexer-xml-cdata") +local whitespace = 
xmlcdatalexer.whitespace + +local space = patterns.space local nospace = 1 - space - P("]]>") -local p_spaces = token(whitespace, space ^1) -local p_cdata = token("comment", nospace^1) +local t_spaces = token(whitespace, space ^1) +local t_cdata = token("comment", nospace^1) xmlcdatalexer._rules = { - { "whitespace", p_spaces }, - { "cdata", p_cdata }, + { "whitespace", t_spaces }, + { "cdata", t_cdata }, } xmlcdatalexer._tokenstyles = context.styleset diff --git a/context/data/scite/context/lexers/scite-context-lexer-xml-comment.lua b/context/data/scite/context/lexers/scite-context-lexer-xml-comment.lua new file mode 100644 index 000000000..b5b3fefe0 --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer-xml-comment.lua @@ -0,0 +1,33 @@ +local info = { + version = 1.002, + comment = "scintilla lpeg lexer for xml comments", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +local P = lpeg.P + +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns + +local token = lexer.token + +local xmlcommentlexer = lexer.new("xml-comment","scite-context-lexer-xml-comment") +local whitespace = xmlcommentlexer.whitespace + +local space = patterns.space +local nospace = 1 - space - P("-->") + +local t_spaces = token(whitespace, space ^1) +local t_comment = token("comment", nospace^1) + +xmlcommentlexer._rules = { + { "whitespace", t_spaces }, + { "comment", t_comment }, +} + +xmlcommentlexer._tokenstyles = context.styleset + +return xmlcommentlexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-xml-script.lua b/context/data/scite/context/lexers/scite-context-lexer-xml-script.lua new file mode 100644 index 000000000..bbb938dc5 --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer-xml-script.lua @@ -0,0 +1,33 @@ +local info = { + version = 1.002, + comment = "scintilla lpeg 
lexer for xml script", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +local P = lpeg.P + +local lexer = require("lexer") +local context = lexer.context +local patterns = context.patterns + +local token = lexer.token + +local xmlscriptlexer = lexer.new("xml-script","scite-context-lexer-xml-script") +local whitespace = xmlscriptlexer.whitespace + +local space = patterns.space +local nospace = 1 - space - (P("</") * P("script") + P("SCRIPT")) * P(">") + +local t_spaces = token(whitespace, space ^1) +local t_script = token("default", nospace^1) + +xmlscriptlexer._rules = { + { "whitespace", t_spaces }, + { "script", t_script }, +} + +xmlscriptlexer._tokenstyles = context.styleset + +return xmlscriptlexer diff --git a/context/data/scite/lexers/scite-context-lexer-xml.lua b/context/data/scite/context/lexers/scite-context-lexer-xml.lua index 241e22591..77c89b1d6 100644 --- a/context/data/scite/lexers/scite-context-lexer-xml.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-xml.lua @@ -12,26 +12,28 @@ local info = { -- todo: parse entities in attributes -if not lexer._CONTEXTEXTENSIONS then require("scite-context-lexer") end - -local lexer = lexer local global, string, table, lpeg = _G, string, table, lpeg -local token, exact_match = lexer.token, lexer.exact_match -local P, R, S, V, C, Cmt, Ct, Cp = lpeg.P, lpeg.R, lpeg.S, lpeg.V, lpeg.C, lpeg.Cmt, lpeg.Ct, lpeg.Cp +local P, R, S, C, Cmt, Cp = lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.Cmt, lpeg.Cp local type = type local match, find = string.match, string.find -local xmllexer = { _NAME = "xml", _FILENAME = "scite-context-lexer-xml" } -local whitespace = lexer.WHITESPACE -- triggers states +local lexer = require("lexer") local context = lexer.context +local patterns = context.patterns + +local token = lexer.token +local exact_match = lexer.exact_match -local xmlcommentlexer = 
lexer.load("scite-context-lexer-xml-comment") -- indirect (some issue with the lexer framework) -local xmlcdatalexer = lexer.load("scite-context-lexer-xml-cdata") -- indirect (some issue with the lexer framework) -local xmlscriptlexer = lexer.load("scite-context-lexer-xml-script") -- indirect (some issue with the lexer framework) -local lualexer = lexer.load("scite-context-lexer-lua") -- +local xmllexer = lexer.new("xml","scite-context-lexer-xml") +local whitespace = xmllexer.whitespace -local space = lexer.space -- S(" \t\n\r\v\f") -local any = lexer.any -- P(1) +local xmlcommentlexer = lexer.load("scite-context-lexer-xml-comment") +local xmlcdatalexer = lexer.load("scite-context-lexer-xml-cdata") +local xmlscriptlexer = lexer.load("scite-context-lexer-xml-script") +local lualexer = lexer.load("scite-context-lexer-lua") + +local space = patterns.space +local any = patterns.any local dquote = P('"') local squote = P("'") @@ -40,7 +42,7 @@ local semicolon = P(";") local equal = P("=") local ampersand = P("&") -local name = (R("az","AZ","09") + S('_-.'))^1 +local name = (R("az","AZ","09") + S("_-."))^1 local openbegin = P("<") local openend = P("</") local closebegin = P("/>") + P(">") @@ -84,12 +86,12 @@ local validminimum = 3 -- -- <?context-directive editor language us ?> -local p_preamble = Cmt(#P("<?xml "), function(input,i,_) -- todo: utf bomb +local t_preamble = Cmt(P("<?xml "), function(input,i,_) -- todo: utf bomb, no longer # if i < 200 then validwords, validminimum = false, 3 local language = match(input,"^<%?xml[^>]*%?>%s*<%?context%-directive%s+editor%s+language%s+(..)%s+%?>") -- if not language then - -- language = match(input,'^<%?xml[^>]*language=[\"\'](..)[\"\'][^>]*%?>',i) + -- language = match(input,"^<%?xml[^>]*language=[\"\'](..)[\"\'][^>]*%?>",i) -- end if language then validwords, validminimum = setwordlist(language) @@ -98,24 +100,23 @@ local p_preamble = Cmt(#P("<?xml "), function(input,i,_) -- todo: utf bomb return false end) -local p_word = 
+local t_word = -- Ct( iwordpattern / function(s) return styleofword(validwords,validminimum,s) end * Cp() ) -- the function can be inlined iwordpattern / function(s) return styleofword(validwords,validminimum,s) end * Cp() -- the function can be inlined -local p_rest = +local t_rest = token("default", any) -local p_text = +local t_text = token("default", (1-S("<>&")-space)^1) -local p_spacing = +local t_spacing = token(whitespace, space^1) --- token("whitespace", space^1) -local p_optionalwhitespace = - p_spacing^0 +local t_optionalwhitespace = + token("default", space^1)^0 -local p_localspacing = +local t_localspacing = token("default", space^1) -- Because we want a differently colored open and close we need an embedded lexer (whitespace @@ -123,22 +124,22 @@ local p_localspacing = -- Even using different style keys is not robust as they can be shared. I'll fix the main -- lexer code. -local p_sstring = +local t_sstring = token("quote",dquote) * token("string",(1-dquote)^0) -- different from context * token("quote",dquote) -local p_dstring = +local t_dstring = token("quote",squote) * token("string",(1-squote)^0) -- different from context * token("quote",squote) --- local p_comment = +-- local t_comment = -- token("command",opencomment) -- * token("comment",(1-closecomment)^0) -- different from context -- * token("command",closecomment) --- local p_cdata = +-- local t_cdata = -- token("command",opencdata) -- * token("comment",(1-closecdata)^0) -- different from context -- * token("command",closecdata) @@ -156,74 +157,74 @@ local p_dstring = -- <!ENTITY xxxx PUBLIC "yyyy" > -- <!ENTITY xxxx "yyyy" > -local p_docstr = p_dstring + p_sstring +local t_docstr = t_dstring + t_sstring -local p_docent = token("command",P("<!ENTITY")) - * p_optionalwhitespace +local t_docent = token("command",P("<!ENTITY")) + * t_optionalwhitespace * token("keyword",name) - * p_optionalwhitespace + * t_optionalwhitespace * ( ( token("constant",P("SYSTEM")) - * p_optionalwhitespace - * 
p_docstr - * p_optionalwhitespace + * t_optionalwhitespace + * t_docstr + * t_optionalwhitespace * token("constant",P("NDATA")) - * p_optionalwhitespace + * t_optionalwhitespace * token("keyword",name) ) + ( token("constant",P("PUBLIC")) - * p_optionalwhitespace - * p_docstr + * t_optionalwhitespace + * t_docstr ) + ( - p_docstr + t_docstr ) ) - * p_optionalwhitespace + * t_optionalwhitespace * token("command",P(">")) -local p_docele = token("command",P("<!ELEMENT")) - * p_optionalwhitespace +local t_docele = token("command",P("<!ELEMENT")) + * t_optionalwhitespace * token("keyword",name) - * p_optionalwhitespace + * t_optionalwhitespace * token("command",P("(")) * ( - p_spacing + t_localspacing + token("constant",P("#CDATA") + P("#PCDATA") + P("ANY")) + token("text",P(",")) + token("comment",(1-S(",)"))^1) )^1 * token("command",P(")")) - * p_optionalwhitespace + * t_optionalwhitespace * token("command",P(">")) -local p_docset = token("command",P("[")) - * p_optionalwhitespace - * ((p_optionalwhitespace * (p_docent + p_docele))^1 + token("comment",(1-P("]"))^0)) - * p_optionalwhitespace +local t_docset = token("command",P("[")) + * t_optionalwhitespace + * ((t_optionalwhitespace * (t_docent + t_docele))^1 + token("comment",(1-P("]"))^0)) + * t_optionalwhitespace * token("command",P("]")) -local p_doctype = token("command",P("<!DOCTYPE")) - * p_optionalwhitespace +local t_doctype = token("command",P("<!DOCTYPE")) + * t_optionalwhitespace * token("keyword",name) - * p_optionalwhitespace + * t_optionalwhitespace * ( ( token("constant",P("PUBLIC")) - * p_optionalwhitespace - * p_docstr - * p_optionalwhitespace - * p_docstr - * p_optionalwhitespace + * t_optionalwhitespace + * t_docstr + * t_optionalwhitespace + * t_docstr + * t_optionalwhitespace ) + ( token("constant",P("SYSTEM")) - * p_optionalwhitespace - * p_docstr - * p_optionalwhitespace + * t_optionalwhitespace + * t_docstr + * t_optionalwhitespace ) )^-1 - * p_docset^-1 - * p_optionalwhitespace + * t_docset^-1 
+ * t_optionalwhitespace * token("command",P(">")) lexer.embed_lexer(xmllexer, lualexer, token("command", openlua), token("command", closelua)) @@ -231,7 +232,7 @@ lexer.embed_lexer(xmllexer, xmlcommentlexer, token("command", opencomment), toke lexer.embed_lexer(xmllexer, xmlcdatalexer, token("command", opencdata), token("command", closecdata)) lexer.embed_lexer(xmllexer, xmlscriptlexer, token("command", openscript), token("command", closescript)) --- local p_name = +-- local t_name = -- token("plain",name) -- * ( -- token("default",colon) @@ -239,11 +240,11 @@ lexer.embed_lexer(xmllexer, xmlscriptlexer, token("command", openscript), toke -- ) -- + token("keyword",name) -local p_name = -- more robust +local t_name = -- more robust token("plain",name * colon)^-1 * token("keyword",name) --- local p_key = +-- local t_key = -- token("plain",name) -- * ( -- token("default",colon) @@ -251,81 +252,82 @@ local p_name = -- more robust -- ) -- + token("constant",name) -local p_key = +local t_key = token("plain",name * colon)^-1 * token("constant",name) -local p_attributes = ( - p_optionalwhitespace - * p_key - * p_optionalwhitespace +local t_attributes = ( + t_optionalwhitespace + * t_key + * t_optionalwhitespace * token("plain",equal) - * p_optionalwhitespace - * (p_dstring + p_sstring) - * p_optionalwhitespace + * t_optionalwhitespace + * (t_dstring + t_sstring) + * t_optionalwhitespace )^0 -local p_open = +local t_open = token("keyword",openbegin) * ( - p_name - * p_optionalwhitespace - * p_attributes + t_name + * t_optionalwhitespace + * t_attributes * token("keyword",closebegin) + token("error",(1-closebegin)^1) ) -local p_close = +local t_close = token("keyword",openend) * ( - p_name - * p_optionalwhitespace + t_name + * t_optionalwhitespace * token("keyword",closeend) + token("error",(1-closeend)^1) ) -local p_entity = +local t_entity = token("constant",entity) -local p_instruction = +local t_instruction = token("command",openinstruction * P("xml")) - * 
p_optionalwhitespace - * p_attributes - * p_optionalwhitespace + * t_optionalwhitespace + * t_attributes + * t_optionalwhitespace * token("command",closeinstruction) + token("command",openinstruction * name) * token("default",(1-closeinstruction)^1) * token("command",closeinstruction) -local p_invisible = +local t_invisible = token("invisible",invisibles^1) --- local p_preamble = --- token('preamble', p_preamble ) +-- local t_preamble = +-- token("preamble", t_preamble ) xmllexer._rules = { - { "whitespace", p_spacing }, - { "preamble", p_preamble }, - { "word", p_word }, - -- { "text", p_text }, - -- { "comment", p_comment }, - -- { "cdata", p_cdata }, - { "doctype", p_doctype }, - { "instruction", p_instruction }, - { "close", p_close }, - { "open", p_open }, - { "entity", p_entity }, - { "invisible", p_invisible }, - { "rest", p_rest }, + { "whitespace", t_spacing }, + { "preamble", t_preamble }, + { "word", t_word }, + -- { "text", t_text }, + -- { "comment", t_comment }, + -- { "cdata", t_cdata }, + { "doctype", t_doctype }, + { "instruction", t_instruction }, + { "close", t_close }, + { "open", t_open }, + { "entity", t_entity }, + { "invisible", t_invisible }, + { "rest", t_rest }, } xmllexer._tokenstyles = context.styleset xmllexer._foldpattern = P("</") + P("<") + P("/>") -- separate entry else interference ++ P("<!--") + P("-->") -xmllexer._foldsymbols = { -- somehow doesn't work yet +xmllexer._foldsymbols = { _patterns = { "</", "/>", @@ -336,6 +338,13 @@ xmllexer._foldsymbols = { -- somehow doesn't work yet ["/>"] = -1, ["<"] = 1, }, + ["command"] = { + ["</"] = -1, + ["/>"] = -1, + ["<!--"] = 1, + ["-->"] = -1, + ["<"] = 1, + }, } return xmllexer diff --git a/context/data/scite/context/lexers/scite-context-lexer.lua b/context/data/scite/context/lexers/scite-context-lexer.lua new file mode 100644 index 000000000..6335af911 --- /dev/null +++ b/context/data/scite/context/lexers/scite-context-lexer.lua @@ -0,0 +1,2018 @@ +local info = { + version = 1.400, 
+ comment = "basics for scintilla lpeg lexer for context/metafun, contains copyrighted code from mitchell.att.foicica.com", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", + +} + +-- todo: hook into context resolver etc +-- todo: only old api in lexers, rest in context subnamespace +-- todo: make sure we can run in one state .. copies or shared? +-- todo: auto-nesting + +local log = false +local trace = false +local detail = false +local show = false -- nice for tracing (also for later) +local collapse = false -- can save some 15% (maybe easier on scintilla) +local inspect = false -- can save some 15% (maybe easier on scintilla) + +-- local log = true +-- local trace = true + +-- GET GOING +-- +-- You need to copy this file over lexer.lua. In principle other lexers could +-- work too but not now. Maybe some day. All patterns will move into the patterns +-- name space. I might do the same with styles. If you run an older version of +-- SciTE you can take one of the archives. Pre 3.41 versions can just be copied +-- to the right path, as there we still use part of the normal lexer. +-- +-- REMARK +-- +-- We started using lpeg lexing as soon as it came available. Because we had +-- rather demanding files and also wanted to use nested lexers, we ended up with +-- our own variant (more robust and faster). As a consequence successive versions +-- had to be adapted to changes in the (still unstable) api. In addition to +-- lexing we also have spell checking and such. +-- +-- STATUS +-- +-- todo: maybe use a special stripped version of the dll (stable api) +-- todo: play with hotspot and other properties +-- wish: access to all scite properties and in fact integrate in scite +-- todo: add proper tracing and so .. 
not too hard as we can run on mtxrun +-- todo: get rid of these lexers.STYLE_XX and lexers.XX (hide such details) +-- +-- HISTORY +-- +-- The fold and lex functions are copied and patched from original code by Mitchell +-- (see lexer.lua). All errors are mine. The ability to use lpeg is a real nice +-- addition and a brilliant move. The code is a byproduct of the (mainly Lua based) +-- textadept (still a rapidly moving target) that unfortunately misses a realtime +-- output pane. On the other hand, SciTE is somewhat crippled by the fact that we +-- cannot pop in our own (language dependent) lexer into the output pane (somehow +-- the errorlist lexer is hard coded into the editor). Hopefully that will change +-- some day. +-- +-- Starting with SciTE version 3.20 there is an issue with coloring. We still +-- lack a connection with SciTE itself (properties as well as printing to the log +-- pane) and we cannot trace this (on Windows). As far as I can see, there are no +-- fundamental changes in lexer.lua or LexLPeg.cxx so it must be in Scintilla +-- itself. So for the moment I stick to 3.10. Indicators are: no lexing of 'next' +-- and 'goto <label>' in the Lua lexer and no brace highlighting either. Interesting +-- is that it does work ok in the cld lexer (so the Lua code is okay). Also the fact +-- that char-def.lua lexes fast is a signal that the lexer quits somewhere halfway. +-- Maybe there are some hard coded limitations on the amount of styles and/or length +-- of names. +-- +-- After checking 3.24 and adapting to the new lexer tables things are okay again. +-- So, this version assumes 3.24 or higher. In 3.24 we have a different token +-- result, i.e. no longer a { tag, pattern } but just two return values. I didn't +-- check other changes but will do that when I run into issues. I had optimized +-- these small tables by hashing which was more efficient but this is no longer +-- needed. 
For the moment we keep some of that code around as I don't know what +-- happens in future versions. +-- +-- In 3.31 another major change took place: some helper constants (maybe they're no +-- longer constants) and functions were moved into the lexer module's namespace but +-- the functions are assigned to the Lua module afterward so we cannot alias them +-- beforehand. We're probably getting close to a stable interface now. I've +-- considered making a whole copy and patching the other functions too as we need an +-- extra nesting model. However, I don't want to maintain too much. An unfortunate +-- change in 3.03 is that a script can no longer be specified. This means that +-- instead of loading the extensions via the properties file, we now need to load +-- them in our own lexers, unless of course we replace lexer.lua completely (which +-- adds another installation issue). +-- +-- Another change has been that _LEXERHOME is no longer available. It looks like +-- more and more functionality gets dropped so maybe at some point we need to ship +-- our own dll/so files. For instance, I'd like to have access to the current +-- filename and other scite properties. We could then cache some info with +-- each file, if only we had knowledge of what file we're dealing with. +-- +-- For huge files folding can be pretty slow and I do have some large ones that I +-- keep open all the time. Loading is normally no issue, unless one has remembered +-- the status and the cursor is at the last line of a 200K line file. Optimizing the +-- fold function brought down loading of char-def.lua from 14 sec => 8 sec. +-- Replacing the word_match function and optimizing the lex function gained another +-- 2+ seconds. A 6 second load is quite ok for me. The changed lexer table structure +-- (no subtables) brings loading down to a few seconds. 
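The hashing optimization mentioned just above (small word tables hashed instead of scanned) is easy to illustrate in plain Lua. This is an illustrative sketch, not code from the lexer itself; all names are made up for the example:

```lua
-- Sketch of the hashing optimization described above: turning a small
-- word list into a hash makes each lookup a constant-time index instead
-- of a linear scan. Names here are illustrative only.

local words = { "local", "function", "end", "return", "if", "then" }

-- linear variant: scan the array on every lookup
local function is_keyword_linear(s)
    for i = 1, #words do
        if words[i] == s then
            return true
        end
    end
    return false
end

-- hashed variant: build the set once, then index it
local keywords = { }
for i = 1, #words do
    keywords[words[i]] = true
end

local function is_keyword_hashed(s)
    return keywords[s] == true
end

assert(is_keyword_linear("return") == is_keyword_hashed("return"))
assert(is_keyword_linear("foo")    == is_keyword_hashed("foo"))
```

Both variants agree on every input; the hashed one simply avoids rescanning the list, which is the kind of win that adds up when the lexer consults the table for every word in a large file.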
+-- +-- When the lexer path is copied to the textadept lexer path, and the theme +-- definition to the theme path (as lexer.lua), the lexer works there as well. When I +-- have time and motivation I will make a proper setup file to tune the look and feel a +-- bit and associate suffixes with the context lexer. The textadept editor has a +-- nice style tracing option but lacks the tabs for selecting files that scite has. +-- It also has no integrated run that pipes to the log pane. Interesting is that the +-- jit version of textadept crashes on lexing large files (and does not feel faster +-- either; maybe a side effect of known limitations). +-- +-- Function load(lexer_name) starts with _lexers.WHITESPACE = lexer_name .. +-- '_whitespace' which means that we need to have it frozen at the moment we load +-- another lexer. Because spacing is used to revert to a parent lexer we need to +-- make sure that we load children as late as possible in order not to get the wrong +-- whitespace trigger. This took me quite a while to figure out (not being that +-- familiar with the internals). The lex and fold functions have been optimized. It +-- is a pity that there is no proper print available. Another thing needed is a +-- default style in our own theme style definition, as otherwise we get wrong nested +-- lexers, especially if they are larger than a view. This is the hardest part of +-- getting things right. +-- +-- It's a pity that there is no scintillua library for the OSX version of scite. +-- Even better would be to have the scintillua library as an integral part of scite as +-- that way I could use OSX alongside windows and linux (depending on needs). Also +-- nice would be to have a proper interface to scite then because currently the +-- lexer is rather isolated and the lua version does not provide all standard +-- libraries. It would also be good to have lpeg support in the regular scite lua +-- extension (currently you need to pick it up from someplace else). 
+-- +-- With 3.41 the interface changed again so it is time to look into the C++ code +-- and consider compiling and patching myself. Loading is more complicated now as +-- the lexer gets loaded automatically, so we have little control over extending the +-- code. After a few days trying all kinds of solutions I decided to follow a +-- different approach: drop in a complete replacement. This of course means that I +-- need to keep track of even more changes (which for sure will happen) but at least +-- I get rid of interferences. The api (lexing and configuration) is simply too +-- unstable across versions. Maybe in a few years things have stabilized. (Or maybe +-- it's not really expected that one writes lexers at all.) A side effect is that I +-- now no longer will use shipped lexers but just the built-in ones. Not that it +-- matters much as the context lexers cover what I need (and I can always write +-- more). +-- +-- In fact, the transition to 3.41 was triggered by an unfortunate update of Ubuntu +-- which left me with an incompatible SciTE and lexer library and updating was not +-- possible due to the lack of 64 bit libraries. We'll see what the future brings. +-- +-- Promising is that the library now can use another Lua instance so maybe some day +-- it will get properly integrated in SciTE and we can use more clever scripting. +-- +-- In some lexers we use embedded ones even if we could do it directly. The reason is +-- that when the end token is edited (e.g. -->), backtracking to the space before the +-- begin token (e.g. <!--) results in applying the surrounding whitespace which in +-- turn means that when the end token is edited right, backtracking doesn't go back. +-- One solution (in the dll) would be to backtrack several space categories. After all, +-- lexing is quite fast (applying the result is much slower). +-- +-- For some reason the first blob of text tends to go wrong (pdf and web). It would be +-- nice to have 'whole doc' initial lexing. 
Quite fishy as it makes it impossible to +-- lex the first part well (for already opened documents) because only a partial +-- text is passed. +-- +-- So, maybe I should just write this from scratch (assuming more generic usage) +-- because after all, the dll expects just tables, based on a string. I can then also +-- do some more aggressive resource sharing (needed when used generically). +-- +-- I think that nested lexers are still bugged (esp over longer ranges). It never was +-- robust or maybe it's simply not meant for too complex cases. The 3.24 version was +-- probably the best so far. The fact that styles bleed between lexers even if their +-- states are isolated is an issue. Another issue is that zero characters in the +-- text passed to the lexer can mess things up (pdf files have them in streams). +-- +-- For more complex 'languages', like web or xml, we need to make sure that we use +-- e.g. 'default' for spacing that makes up some construct. Ok, we then still have a +-- backtracking issue but less. +-- +-- TODO +-- +-- I can make an export to context, but first I'll redo the code that makes the grammar, +-- as we only seem to need +-- +-- lexer._TOKENSTYLES : table +-- lexer._CHILDREN : flag +-- lexer._EXTRASTYLES : table +-- lexer._GRAMMAR : flag +-- +-- lexers.load : function +-- lexers.lex : function +-- +-- So, if we drop compatibility with other lex definitions, we can make things simpler. + +-- TRACING +-- +-- The advantage is that we can now check more easily with regular Lua. We can also +-- use wine and print to the console (somehow stdout is intercepted there). So, I've +-- added a bit of tracing. Interesting is to notice that each document gets its own +-- instance which has advantages but also means that when we are spellchecking we +-- reload the word lists each time. (In the past I assumed a shared instance and took +-- some precautions.) 
+ +local lpeg = require("lpeg") + +local global = _G +local find, gmatch, match, lower, upper, gsub, sub, format = string.find, string.gmatch, string.match, string.lower, string.upper, string.gsub, string.sub, string.format +local concat = table.concat +local type, next, setmetatable, rawset, tonumber, tostring = type, next, setmetatable, rawset, tonumber, tostring +local R, P, S, V, C, Cp, Cs, Ct, Cmt, Cc, Cf, Cg, Carg = lpeg.R, lpeg.P, lpeg.S, lpeg.V, lpeg.C, lpeg.Cp, lpeg.Cs, lpeg.Ct, lpeg.Cmt, lpeg.Cc, lpeg.Cf, lpeg.Cg, lpeg.Carg +local lpegmatch = lpeg.match + +local nesting = 0 + +local function report(fmt,str,...) + if log then + if str then + fmt = format(fmt,str,...) + end + print(format("scite lpeg lexer > %s > %s",nesting == 0 and "-" or nesting,fmt)) + end +end + +local function inform(...) + if log and trace then + report(...) + end +end + +inform("loading context lexer module (global table: %s)",tostring(global)) + +if not package.searchpath then + + -- Unfortunately the io library is only available when we end up + -- in this branch of code. + + inform("using adapted function 'package.searchpath' (if used at all)") + + function package.searchpath(name,path) + local tried = { } + for part in gmatch(path,"[^;]+") do + local filename = gsub(part,"%?",name) + local f = io.open(filename,"r") + if f then + inform("file found on path: %s",filename) + f:close() + return filename + end + tried[#tried + 1] = format("no file '%s'",filename) + end + -- added: local path .. 
for testing + local f = io.open(filename,"r") + if f then + inform("file found on current path: %s",filename) + f:close() + return filename + end + -- + tried[#tried + 1] = format("no file '%s'",filename) + return nil, concat(tried,"\n") + end + +end + +local lexers = { } +local context = { } +lexers.context = context + +local patterns = { } +context.patterns = patterns -- todo: lexers.patterns + +context.report = report +context.inform = inform + +lexers.LEXERPATH = package.path -- can be multiple paths separated by ; +lexers.LEXERPATH = "./?.lua" -- good enough, will be set anyway (was + +if resolvers then + -- todo: set LEXERPATH + -- todo: set report +end + +local usedlexers = { } +local parent_lexer = nil + +-- The problem with styles is that there is some nasty interaction with scintilla +-- and each version of lexer dll/so has a different issue. So, from now on we will +-- just add them here. There is also a limit on some 30 styles. Maybe I should +-- hash them in order to reuse. + +-- todo: work with proper hashes and analyze what styles are really used by a +-- lexer + +local default = { + "nothing", "whitespace", "comment", "string", "number", "keyword", + "identifier", "operator", "error", "preprocessor", "constant", "variable", + "function", "type", "label", "embedded", + "quote", "special", "extra", "reserved", "okay", "warning", + "command", "internal", "preamble", "grouping", "primitive", "plain", + "user", + -- not used (yet) .. we cross the 32 boundary so had to patch the initializer, see (1) + "char", "class", "data", "definition", "invisible", "regex", + "standout", "tag", + "text", +} + +local predefined = { + "default", "linenumber", "bracelight", "bracebad", "controlchar", + "indentguide", "calltip" +} + +-- Bah ... ugly ... nicer would be a proper hash .. we now have properties +-- as well as STYLE_* and some connection between them ... why .. ok, we +-- could delay things but who cares. 
Anyway, at this moment the properties +-- are still unknown. + +local function preparestyles(list) + local reverse = { } + for i=1,#list do + local k = list[i] + local K = upper(k) + local s = "style." .. k + lexers[K] = k -- is this used + lexers["STYLE_"..K] = "$(" .. k .. ")" + reverse[k] = true + end + return reverse +end + +local defaultstyles = preparestyles(default) +local predefinedstyles = preparestyles(predefined) + +-- These helpers are set afterwards so we delay their initialization ... there +-- is no need to alias each time again and this way we can more easily adapt +-- to updates. + +-- These keep changing (values, functions, tables ...) so we need to check these +-- with each update. Some of them are set in the loader (the require 'lexer' is +-- in fact not a real one as the lexer code is loaded in the dll). It's also not +-- getting more efficient. + +-- FOLD_BASE = lexers.FOLD_BASE or SC_FOLDLEVELBASE +-- FOLD_HEADER = lexers.FOLD_HEADER or SC_FOLDLEVELHEADERFLAG +-- FOLD_BLANK = lexers.FOLD_BLANK or SC_FOLDLEVELWHITEFLAG +-- get_style_at = lexers.get_style_at or GetStyleAt +-- get_indent_amount = lexers.get_indent_amount or GetIndentAmount +-- get_property = lexers.get_property or GetProperty +-- get_fold_level = lexers.get_fold_level or GetFoldLevel + +-- It needs checking: do we have access to all properties now? I'll clean +-- this up anyway as I want a simple clean and stable model. + +-- This is somewhat messy. The lexer dll provides some virtual fields: +-- +-- + property +-- + property_int +-- + style_at +-- + fold_level +-- + indent_amount +-- +-- but for some reason not: +-- +-- + property_expanded +-- +-- As a consequence we need to define it here because otherwise the +-- lexer will crash. The fuzzy thing is that we don't have to define +-- the property and property_int tables but we do have to define the +-- expanded beforehand. 
The folding properties are no longer interfaced +-- so the interface to scite is now rather weak (only a few hard coded +-- properties). + +local FOLD_BASE = 0 +local FOLD_HEADER = 0 +local FOLD_BLANK = 0 + +local style_at = { } +local indent_amount = { } +local fold_level = { } + +local function check_main_properties() + if not lexers.property then + lexers.property = { } + end + if not lexers.property_int then + lexers.property_int = setmetatable({ }, { + __index = function(t,k) + -- why the tostring .. it relies on lua casting to a number when + -- doing a comparison + return tonumber(lexers.property[k]) or 0 -- tostring removed + end, + __newindex = function(t,k,v) + report("properties are read-only, '%s' is not changed",k) + end, + }) + end +end + +lexers.property_expanded = setmetatable({ }, { + __index = function(t,k) + -- better be safe for future changes .. what if at some point this is + -- made consistent in the dll ... we need to keep an eye on that + local property = lexers.property + if not property then + check_main_properties() + end + -- + return gsub(property[k],"[$%%]%b()", function(k) + return t[sub(k,3,-2)] + end) + end, + __newindex = function(t,k,v) + report("properties are read-only, '%s' is not changed",k) + end, +}) + +-- A downward compatible feature but obsolete: + +-- local function get_property(tag,default) +-- return lexers.property_int[tag] or lexers.property[tag] or default +-- end + +-- We still want our own properties (as it keeps changing so better play +-- safe from now on): + +local function check_properties(lexer) + if lexer.properties then + return lexer + end + check_main_properties() + -- we use a proxy + local mainproperties = lexers.property + local properties = { } + local expanded = setmetatable({ }, { + __index = function(t,k) + return gsub(properties[k] or mainproperties[k],"[$%%]%b()", function(k) + return t[sub(k,3,-2)] + end) + end, + }) + lexer.properties = setmetatable(properties, { + __index = mainproperties, + 
__call = function(t,k,default) -- expands + local v = expanded[k] + local t = type(default) + if t == "number" then + return tonumber(v) or default + elseif t == "boolean" then + return v == nil and default or v + else + return v or default + end + end, + }) + return lexer +end + +-- do +-- lexers.property = { foo = 123, red = "R" } +-- local a = check_properties({}) print("a.foo",a.properties.foo) +-- a.properties.foo = "bar" print("a.foo",a.properties.foo) +-- a.properties.foo = "bar:$(red)" print("a.foo",a.properties.foo) print("a.foo",a.properties("foo")) +-- end + +local function set(value,default) + if value == 0 or value == false or value == "0" then + return false + elseif value == 1 or value == true or value == "1" then + return true + else + return default + end +end + +local function check_context_properties() + local property = lexers.property -- let's hope that this stays + log = set(property["lexer.context.log"], log) + trace = set(property["lexer.context.trace"], trace) + detail = set(property["lexer.context.detail"], detail) + show = set(property["lexer.context.show"], show) + collapse = set(property["lexer.context.collapse"],collapse) + inspect = set(property["lexer.context.inspect"], inspect) +end + +function context.registerproperties(p) -- global + check_main_properties() + local property = lexers.property -- let's hope that this stays + for k, v in next, p do + property[k] = v + end + check_context_properties() +end + +context.properties = setmetatable({ }, { + __index = lexers.property, + __newindex = function(t,k,v) + check_main_properties() + lexers.property[k] = v + check_context_properties() + end, +}) + +-- We want locals so we set them delayed. Once. 
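The property proxies above all rely on one small string trick: in a Lua pattern, `%b()` matches a balanced `(...)` pair, so `gsub` with `"[$%%]%b()"` can replace `$(key)` (and `%(key)`) references, recursing until no references remain. A self-contained sketch of that expansion, outside the proxy machinery (the property names here are made up for the example):

```lua
-- Minimal sketch of the $(key) expansion used by the property proxies
-- above. "[$%%]%b()" matches "$(...)" or "%(...)"; sub(k,3,-2) strips
-- the leading "$(" or "%(" and the closing ")" to get the key name.

local property = {
    ["color.fore"]    = "#000000",
    ["color.back"]    = "#FFFFFF",
    ["style.default"] = "fore:$(color.fore),back:$(color.back)",
}

local function expand(value)
    -- recurse so that referenced values may themselves hold references;
    -- the parentheses drop gsub's second return value (the match count)
    return (value:gsub("[$%%]%b()", function(k)
        return expand(property[k:sub(3,-2)] or "")
    end))
end

print(expand(property["style.default"]))
-- fore:#000000,back:#FFFFFF
```

The real code does the same but routes the recursion through a metatable `__index` so expanded values can be looked up lazily per key.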
+ +local function initialize() + FOLD_BASE = lexers.FOLD_BASE + FOLD_HEADER = lexers.FOLD_HEADER + FOLD_BLANK = lexers.FOLD_BLANK + -- + style_at = lexers.style_at -- table + indent_amount = lexers.indent_amount -- table + fold_level = lexers.fold_level -- table + -- + check_main_properties() + -- + initialize = nil +end + +-- Style handler. +-- +-- The property table will be set later (after loading) by the library. The +-- styleset is not needed any more as we predefine all styles as defaults +-- anyway (too bug-sensitive otherwise). + +local function toproperty(specification) + local serialized = { } + for key, value in next, specification do + if value == true then + serialized[#serialized+1] = key + elseif type(value) == "table" then + serialized[#serialized+1] = key .. ":" .. "#" .. value[1] .. value[2] .. value[3] + else + serialized[#serialized+1] = key .. ":" .. tostring(value) + end + end + return concat(serialized,",") +end + +local function tostyles(styles) + local styleset = { } + local property = lexers.property or { } + for k, v in next, styles do + v = toproperty(v) + styleset[k] = v + property["style."..k] = v + end + return styleset +end + +context.toproperty = toproperty +context.tostyles = tostyles + +local function sortedkeys(hash) + local t, n = { }, 0 + for k, v in next, hash do + t[#t+1] = k + local l = #tostring(k) + if l > n then + n = l + end + end + table.sort(t) + return t, n +end + +-- If we had one instance/state of Lua as well as all regular libraries +-- preloaded we could use the context base libraries. So, let's go poor- +-- man's solution now. + +function context.registerstyles(styles) + local styleset = tostyles(styles) + context.styles = styles + context.styleset = styleset + if trace then + if detail then + local t, n = sortedkeys(styleset) + local template = " %-" .. n .. 
"s : %s" + report("initializing styleset:") + for i=1,#t do + local k = t[i] + report(template,k,styleset[k]) + end + else + report("initializing styleset") + end + end +end + +-- Some spell checking related stuff. Unfortunately we cannot use a path set +-- by property. This will get a hook for resolvers. + +local locations = { + "context/lexers", -- context lexers + "context/lexers/data", -- context lexers + "../lexers", -- original lexers + "../lexers/data", -- original lexers + ".", -- whatever + "./data", -- whatever +} + +local function collect(name) + local root = gsub(lexers.LEXERPATH or ".","/.-lua$","") .. "/" -- this is a horrible hack + -- report("module '%s' locating '%s'",tostring(lexers),name) + for i=1,#locations do + local fullname = root .. locations[i] .. "/" .. name .. ".lua" -- so we can also check for .luc + if trace then + report("attempt to locate '%s'",fullname) + end + local okay, result = pcall(function () return dofile(fullname) end) + if okay then + return result, fullname + end + end +end + +function context.loadluafile(name) + local data, fullname = collect(name) + if data then + if trace then + report("lua file '%s' has been loaded",fullname) + end + return data, fullname + end + report("unable to load lua file '%s'",name) +end + +-- in fact we could share more as we probably process the data but then we need +-- to have a more advanced helper + +local cache = { } + +function context.loaddefinitions(name) + local data = cache[name] + if data then + if trace then + report("reusing definitions '%s'",name) + end + return data + elseif trace and data == false then + report("definitions '%s' were not found",name) + end + local data, fullname = collect(name) + if not data then + report("unable to load definition file '%s'",name) + data = false + elseif trace then + report("definition file '%s' has been loaded",fullname) + if detail then + local t, n = sortedkeys(data) + local template = " %-" .. n .. 
"s : %s" + for i=1,#t do + local k = t[i] + local v = data[k] + if type(v) ~= "table" then + report(template,k,tostring(v)) + elseif #v > 0 then + report(template,k,#v) + else + -- no need to show hash + end + end + end + end + cache[name] = data + return type(data) == "table" and data +end + +function context.word_match(words,word_chars,case_insensitive) + local chars = "%w_" -- maybe just "" when word_chars + if word_chars then + chars = "^([" .. chars .. gsub(word_chars,"([%^%]%-])", "%%%1") .."]+)" + else + chars = "^([" .. chars .."]+)" + end + if case_insensitive then + local word_list = { } + for i=1,#words do + word_list[lower(words[i])] = true + end + return P(function(input, index) + local s, e, word = find(input,chars,index) + return word and word_list[lower(word)] and e + 1 or nil + end) + else + local word_list = { } + for i=1,#words do + word_list[words[i]] = true + end + return P(function(input, index) + local s, e, word = find(input,chars,index) + return word and word_list[word] and e + 1 or nil + end) + end +end + +-- Patterns are grouped in a separate namespace but the regular lexers expect +-- shortcuts to be present in the lexers library. Maybe I'll incorporate some +-- of l-lpeg later. 
+ +do + + local anything = P(1) + local idtoken = R("az","AZ","\127\255","__") + local digit = R("09") + local sign = S("+-") + local period = P(".") + local octdigit = R("07") + local hexdigit = R("09","AF","af") + local lower = R("az") + local upper = R("AZ") + local alpha = upper + lower + local space = S(" \n\r\t\f\v") + local eol = S("\r\n") + local backslash = P("\\") + local decimal = digit^1 + local octal = P("0") + * octdigit^1 + local hexadecimal = P("0") * S("xX") + * (hexdigit^0 * period * hexdigit^1 + hexdigit^1 * period * hexdigit^0 + hexdigit^1) + * (S("pP") * sign^-1 * hexdigit^1)^-1 -- * + + patterns.idtoken = idtoken + patterns.digit = digit + patterns.sign = sign + patterns.period = period + patterns.octdigit = octdigit + patterns.hexdigit = hexdigit + patterns.ascii = R("\000\127") -- useless + patterns.extend = R("\000\255") -- useless + patterns.control = R("\000\031") + patterns.lower = lower + patterns.upper = upper + patterns.alpha = alpha + patterns.decimal = decimal + patterns.octal = octal + patterns.hexadecimal = hexadecimal + patterns.float = sign^-1 + * (digit^0 * period * digit^1 + digit^1 * period * digit^0 + digit^1) + * S("eE") * sign^-1 * digit^1 -- * + patterns.cardinal = decimal + + patterns.signeddecimal = sign^-1 * decimal + patterns.signedoctal = sign^-1 * octal + patterns.signedhexadecimal = sign^-1 * hexadecimal + patterns.integer = sign^-1 * (hexadecimal + octal + decimal) + patterns.real = + sign^-1 * ( -- at most one + digit^1 * period * digit^0 -- 10.0 10. 
+ + digit^0 * period * digit^1 -- 0.10 .10 + + digit^1 -- 10 + ) + + patterns.anything = anything + patterns.any = anything + patterns.restofline = (1-eol)^1 + patterns.space = space + patterns.spacing = space^1 + patterns.nospacing = (1-space)^1 + patterns.eol = eol + patterns.newline = P("\r\n") + eol + + local endof = S("\n\r\f") + + patterns.startofline = P(function(input,index) + return (index == 1 or lpegmatch(endof,input,index-1)) and index + end) + + -- These are the expected ones for other lexers. Maybe put all in their own + -- namespace and provide a compatibility layer. Or should I just remove them? + + lexers.any = anything + lexers.ascii = patterns.ascii + lexers.extend = patterns.extend + lexers.alpha = alpha + lexers.digit = digit + lexers.alnum = alpha + digit + lexers.lower = lower + lexers.upper = upper + lexers.xdigit = hexdigit + lexers.cntrl = patterns.control + lexers.graph = R("!~") + lexers.print = R(" ~") + lexers.punct = R("!/", ":@", "[\'", "{~") + lexers.space = space + lexers.newline = S("\r\n\f")^1 + lexers.nonnewline = 1 - lexers.newline + lexers.nonnewline_esc = 1 - (lexers.newline + '\\') + backslash * anything + lexers.dec_num = decimal + lexers.oct_num = octal + lexers.hex_num = hexadecimal + lexers.integer = patterns.integer + lexers.float = patterns.float + lexers.word = (alpha + "_") * (alpha + digit + "_")^0 -- weird, why digits + +end + +-- end of patterns + +function context.exact_match(words,word_chars,case_insensitive) + local characters = concat(words) + local pattern -- the concat catches _ etc + if word_chars == true or word_chars == false or word_chars == nil then + word_chars = "" + end + if type(word_chars) == "string" then + pattern = S(characters) + patterns.idtoken + if case_insensitive then + pattern = pattern + S(upper(characters)) + S(lower(characters)) + end + if word_chars ~= "" then + pattern = pattern + S(word_chars) + end + elseif word_chars then + pattern = word_chars + end + if case_insensitive then + local list = { } + if #words == 0 then + for k, v in next, words 
do + list[lower(k)] = v + end + else + for i=1,#words do + list[lower(words[i])] = true + end + end + return Cmt(pattern^1, function(_,i,s) + return list[lower(s)] -- and i or nil + end) + else + local list = { } + if #words == 0 then + for k, v in next, words do + list[k] = v + end + else + for i=1,#words do + list[words[i]] = true + end + end + return Cmt(pattern^1, function(_,i,s) + return list[s] -- and i or nil + end) + end +end + +function context.just_match(words) + local p = P(words[1]) + for i=2,#words do + p = p + P(words[i]) + end + return p +end + +-- spell checking (we can only load lua files) +-- +-- return { +-- min = 3, +-- max = 40, +-- n = 12345, +-- words = { +-- ["someword"] = "someword", +-- ["anotherword"] = "Anotherword", +-- }, +-- } + +local lists = { } + +function context.setwordlist(tag,limit) -- returns hash (lowercase keys and original values) + if not tag or tag == "" then + return false, 3 + end + local list = lists[tag] + if not list then + list = context.loaddefinitions("spell-" .. 
tag) + if not list or type(list) ~= "table" then + report("invalid spell checking list for '%s'",tag) + list = { words = false, min = 3 } + else + list.words = list.words or false + list.min = list.min or 3 + end + lists[tag] = list + end + if trace then + report("enabling spell checking for '%s' with minimum '%s'",tag,list.min) + end + return list.words, list.min +end + +patterns.wordtoken = R("az","AZ","\127\255") +patterns.wordpattern = patterns.wordtoken^3 -- todo: if limit and #s < limit then + +function context.checkedword(validwords,validminimum,s,i) -- ,limit + if not validwords then -- or #s < validminimum then + return true, "text", i -- true, "default", i + else + -- keys are lower + local word = validwords[s] + if word == s then + return true, "okay", i -- exact match + elseif word then + return true, "warning", i -- case issue + else + local word = validwords[lower(s)] + if word == s then + return true, "okay", i -- exact match + elseif word then + return true, "warning", i -- case issue + elseif upper(s) == s then + return true, "warning", i -- probably a logo or acronym + else + return true, "error", i + end + end + end +end + +function context.styleofword(validwords,validminimum,s) -- ,limit + if not validwords or #s < validminimum then + return "text" + else + -- keys are lower + local word = validwords[s] + if word == s then + return "okay" -- exact match + elseif word then + return "warning" -- case issue + else + local word = validwords[lower(s)] + if word == s then + return "okay" -- exact match + elseif word then + return "warning" -- case issue + elseif upper(s) == s then + return "warning" -- probably a logo or acronym + else + return "error" + end + end + end +end + +-- overloaded functions + +local h_table, b_table, n_table = { }, { }, { } -- from the time small tables were used (optimization) + +setmetatable(h_table, { __index = function(t,level) local v = { level, FOLD_HEADER } t[level] = v return v end }) +setmetatable(b_table, { 
__index = function(t,level) local v = { level, FOLD_BLANK } t[level] = v return v end }) +setmetatable(n_table, { __index = function(t,level) local v = { level } t[level] = v return v end }) + +local newline = patterns.newline +local p_yes = Cp() * Cs((1-newline)^1) * newline^-1 +local p_nop = newline + +local folders = { } + +local function fold_by_parsing(text,start_pos,start_line,start_level,lexer) + local folder = folders[lexer] + if not folder then + -- + local pattern, folds, text, start_pos, line_num, prev_level, current_level + -- + local fold_symbols = lexer._foldsymbols + local fold_pattern = lexer._foldpattern -- use lpeg instead (context extension) + -- + if fold_pattern then + -- if no functions are found then we could have a faster one + fold_pattern = Cp() * C(fold_pattern) / function(s,match) + local symbols = fold_symbols[style_at[start_pos + s]] + if symbols then + local l = symbols[match] + if l then + current_level = current_level + l + end + end + end + local action_y = function() + folds[line_num] = prev_level + if current_level > prev_level then + folds[line_num] = prev_level + FOLD_HEADER + end + if current_level < FOLD_BASE then + current_level = FOLD_BASE + end + prev_level = current_level + line_num = line_num + 1 + end + local action_n = function() + folds[line_num] = prev_level + FOLD_BLANK + line_num = line_num + 1 + end + pattern = ((fold_pattern + (1-newline))^1 * newline / action_y + newline/action_n)^0 + + else + -- the traditional one but a bit optimized + local fold_symbols_patterns = fold_symbols._patterns + local action_y = function(pos,line) + for j = 1, #fold_symbols_patterns do + for s, match in gmatch(line,fold_symbols_patterns[j]) do -- "()(" .. patterns[i] .. 
")" + local symbols = fold_symbols[style_at[start_pos + pos + s - 1]] + local l = symbols and symbols[match] + local t = type(l) + if t == "number" then + current_level = current_level + l + elseif t == "function" then + current_level = current_level + l(text, pos, line, s, match) + end + end + end + folds[line_num] = prev_level + if current_level > prev_level then + folds[line_num] = prev_level + FOLD_HEADER + end + if current_level < FOLD_BASE then + current_level = FOLD_BASE + end + prev_level = current_level + line_num = line_num + 1 + end + local action_n = function() + folds[line_num] = prev_level + FOLD_BLANK + line_num = line_num + 1 + end + pattern = (p_yes/action_y + p_nop/action_n)^0 + end + -- + local reset_parser = lexer._reset_parser + -- + folder = function(_text_,_start_pos_,_start_line_,_start_level_) + if reset_parser then + reset_parser() + end + folds = { } + text = _text_ + start_pos = _start_pos_ + line_num = _start_line_ + prev_level = _start_level_ + current_level = prev_level + lpegmatch(pattern,text) + -- make folds collectable + local t = folds + folds = nil + return t + end + folders[lexer] = folder + end + return folder(text,start_pos,start_line,start_level,lexer) +end + +local folds, current_line, prev_level + +local function action_y() + local current_level = FOLD_BASE + indent_amount[current_line] + if current_level > prev_level then -- next level + local i = current_line - 1 + local f + while true do + f = folds[i] + if not f then + break + elseif f[2] == FOLD_BLANK then + i = i - 1 + else + f[2] = FOLD_HEADER -- low indent + break + end + end + folds[current_line] = { current_level } -- high indent + elseif current_level < prev_level then -- prev level + local f = folds[current_line - 1] + if f then + f[1] = prev_level -- high indent + end + folds[current_line] = { current_level } -- low indent + else -- same level + folds[current_line] = { prev_level } + end + prev_level = current_level + current_line = current_line + 1 +end + 
+local function action_n() + folds[current_line] = { prev_level, FOLD_BLANK } + current_line = current_line + 1 +end + +local pattern = ( S("\t ")^0 * ( (1-patterns.eol)^1 / action_y + P(true) / action_n) * newline )^0 + +local function fold_by_indentation(text,start_pos,start_line,start_level) + -- initialize + folds = { } + current_line = start_line + prev_level = start_level + -- define + -- -- not here .. pattern binds and local functions are not frozen + -- analyze + lpegmatch(pattern,text) + -- flatten + for line, level in next, folds do + folds[line] = level[1] + (level[2] or 0) + end + -- done, make folds collectable + local t = folds + folds = nil + return t +end + +local function fold_by_line(text,start_pos,start_line,start_level) + local folds = { } + -- can also be lpeg'd + for _ in gmatch(text,".-\r?\n") do + folds[start_line] = n_table[start_level] -- { start_level } -- stile tables ? needs checking + start_line = start_line + 1 + end + return folds +end + +local threshold_by_lexer = 512 * 1024 -- we don't know the filesize yet +local threshold_by_parsing = 512 * 1024 -- we don't know the filesize yet +local threshold_by_indentation = 512 * 1024 -- we don't know the filesize yet +local threshold_by_line = 512 * 1024 -- we don't know the filesize yet + +function context.fold(lexer,text,start_pos,start_line,start_level) -- hm, we had size thresholds .. 
where did they go + if text == "" then + return { } + end + if initialize then + initialize() + end + local fold_by_lexer = lexer._fold + local fold_by_symbols = lexer._foldsymbols + local filesize = 0 -- we don't know that + if fold_by_lexer then + if filesize <= threshold_by_lexer then + return fold_by_lexer(text,start_pos,start_line,start_level,lexer) + end + elseif fold_by_symbols then -- and lexer.properties("fold.by.parsing",1) > 0 then + if filesize <= threshold_by_parsing then + return fold_by_parsing(text,start_pos,start_line,start_level,lexer) + end + elseif lexer.properties("fold.by.indentation",1) > 0 then + if filesize <= threshold_by_indentation then + return fold_by_indentation(text,start_pos,start_line,start_level,lexer) + end + elseif lexer.properties("fold.by.line",1) > 0 then + if filesize <= threshold_by_line then + return fold_by_line(text,start_pos,start_line,start_level,lexer) + end + end + return { } +end + +-- The following code is mostly unchanged: + +local function add_rule(lexer,id,rule) -- unchanged + if not lexer._RULES then + lexer._RULES = { } + lexer._RULEORDER = { } + end + lexer._RULES[id] = rule + lexer._RULEORDER[#lexer._RULEORDER + 1] = id +end + +-- I finally figured out that adding more styles was an issue because of several +-- reasons: +-- +-- + in old versions there was a limit in the amount, so we overran the built-in +-- hard coded scintilla range +-- + then, the add_style function didn't check for already known ones, so again +-- we had an overrun (with some magic that could be avoided) +-- + then, when I messed with a new default set I realized that there is no check +-- in initializing _TOKENSTYLES (here the inspect function helps) +-- + of course it was mostly a side effect of passing all the used styles to the +-- _tokenstyles instead of only the not-default ones but such a thing should not +-- matter (read: intercepted) +-- +-- This finally removed a head-ache and was revealed by lots of tracing, which I +-- should 
have built in way earlier. + +local function add_style(lexer,token_name,style) -- changed a bit around 3.41 + -- We don't add styles that are already defined as this can overflow the + -- amount possible (in old versions of scintilla). + if defaultstyles[token_name] then + if trace and detail then + report("default style '%s' is ignored as extra style",token_name) + end + return + elseif predefinedstyles[token_name] then + if trace and detail then + report("predefined style '%s' is ignored as extra style",token_name) + end + return + else + if trace and detail then + report("adding extra style '%s' as '%s'",token_name,style) + end + end + -- This is unchanged. We skip the dangerous zone. + local num_styles = lexer._numstyles + if num_styles == 32 then + num_styles = num_styles + 8 + end + if num_styles >= 255 then + report("there can't be more than %s styles",255) + end + lexer._TOKENSTYLES[token_name] = num_styles + lexer._EXTRASTYLES[token_name] = style + lexer._numstyles = num_styles + 1 +end + +local function check_styles(lexer) + -- Here we also use a check for the dangerous zone. That way we can have a + -- larger default set. The original code just assumes that #default is less + -- than the dangerous zone's start. + local numstyles = 0 + local tokenstyles = { } + for i=1, #default do + if numstyles == 32 then + numstyles = numstyles + 8 + end + tokenstyles[default[i]] = numstyles + numstyles = numstyles + 1 + end + -- Unchanged. + for i=1, #predefined do + tokenstyles[predefined[i]] = i + 31 + end + lexer._TOKENSTYLES = tokenstyles + lexer._numstyles = numstyles + lexer._EXTRASTYLES = { } + return lexer +end + +-- At some point an 'any' append showed up in the original code ... +-- but I see no need to catch that case ... better fix the specification. 
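The comments above explain why extra styles must not run into Scintilla's predefined range. The slot numbering used by `check_styles` and `add_style`, counting up but jumping over slots 32 to 39 and capping at 255, can be sketched as (`allocate_styles` is an illustrative name, not the real code):

```lua
-- Sketch of the style slot numbering: count up from 0, jump over the
-- "dangerous zone" 32..39 reserved for predefined styles, refuse to
-- pass 255.
local function allocate_styles(names)
    local slots, n = { }, 0
    for i = 1, #names do
        if n == 32 then
            n = n + 8 -- skip the slots reserved for predefined styles
        end
        if n >= 255 then
            error("there can't be more than 255 styles")
        end
        slots[names[i]] = n
        n = n + 1
    end
    return slots, n
end

local names = { }
for i = 1, 40 do names[i] = "style" .. i end
local slots = allocate_styles(names)
print(slots.style32, slots.style33) -- 31  40: the zone was skipped
```

With this scheme a large default set simply flows around the reserved slots instead of silently overwriting them, which is the overrun described above.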
+-- +-- hm, why are many joined twice + +local function join_tokens(lexer) -- slightly different from the original (no 'any' append) + local patterns = lexer._RULES + local order = lexer._RULEORDER + -- report("lexer: %s, tokens: %s",lexer._NAME,table.concat(order," + ")) + if patterns and order then + local token_rule = patterns[order[1]] -- normally whitespace + for i=2,#order do + token_rule = token_rule + patterns[order[i]] + end + if lexer._TYPE ~= "context" then + token_rule = token_rule + lexers.token(lexers.DEFAULT, patterns.any) + end + lexer._TOKENRULE = token_rule + return token_rule + else + return P(1) + end +end + +local function add_lexer(grammar, lexer) -- mostly the same as the original + local token_rule = join_tokens(lexer) + local lexer_name = lexer._NAME + local children = lexer._CHILDREN + for i=1,#children do + local child = children[i] + if child._CHILDREN then + add_lexer(grammar, child) + end + local child_name = child._NAME + local rules = child._EMBEDDEDRULES[lexer_name] + local rules_token_rule = grammar["__" .. child_name] or rules.token_rule + local pattern = (-rules.end_rule * rules_token_rule)^0 * rules.end_rule^-1 + grammar[child_name] = pattern * V(lexer_name) + local embedded_child = "_" .. child_name + grammar[embedded_child] = rules.start_rule * pattern + token_rule = V(embedded_child) + token_rule + end + if trace then + report("adding lexer '%s' with %s children",lexer_name,#children) + end + grammar["__" .. 
lexer_name] = token_rule + grammar[lexer_name] = token_rule^0 +end + +local function build_grammar(lexer,initial_rule) -- same as the original + local children = lexer._CHILDREN + local lexer_name = lexer._NAME + if children then + if not initial_rule then + initial_rule = lexer_name + end + local grammar = { initial_rule } + add_lexer(grammar, lexer) + lexer._INITIALRULE = initial_rule + lexer._GRAMMAR = Ct(P(grammar)) + if trace then + report("building grammar for '%s' with whitespace '%s'and %s children",lexer_name,lexer.whitespace or "?",#children) + end + else + lexer._GRAMMAR = Ct(join_tokens(lexer)^0) + if trace then + report("building grammar for '%s' with whitespace '%s'",lexer_name,lexer.whitespace or "?") + end + end +end + +-- So far. We need these local functions in the next one. + +local lineparsers = { } + +local maxmatched = 100 + +local function collapsed(t) + local lasttoken = nil + local lastindex = nil + for i=1,#t,2 do + local token = t[i] + local position = t[i+1] + if token == lasttoken then + t[lastindex] = position + elseif lastindex then + lastindex = lastindex + 1 + t[lastindex] = token + lastindex = lastindex + 1 + t[lastindex] = position + lasttoken = token + else + lastindex = i+1 + lasttoken = token + end + end + for i=#t,lastindex+1,-1 do + t[i] = nil + end + return t +end + +local function matched(lexer,grammar,text) + -- text = string.gsub(text,"\z","!") + local t = lpegmatch(grammar,text) + if trace then + if show then + report("output of lexer: %s (max %s entries)",lexer._NAME,maxmatched) + local s = lexer._TOKENSTYLES + local p = 1 + for i=1,2*maxmatched,2 do + local n = i + 1 + local ti = t[i] + local tn = t[n] + if ti then + local txt = sub(text,p,tn-1) + if txt then + txt = gsub(txt,"[%s]"," ") + else + txt = "!no text!" 
+ end + report("%4i : %s > %s (%s) (%s)",n/2,ti,tn,s[ti] or "!unset!",txt) + p = tn + else + break + end + end + end + report("lexer results: %s, length: %s, ranges: %s",lexer._NAME,#text,#t/2) + if collapse then + t = collapsed(t) + report("lexer collapsed: %s, length: %s, ranges: %s",lexer._NAME,#text,#t/2) + end + elseif collapse then + t = collapsed(t) + end + return t +end + +-- Todo: make nice generic lexer (extra argument with start/stop commands) for +-- context itself. + +function context.lex(lexer,text,init_style) + -- local lexer = global._LEXER + local grammar = lexer._GRAMMAR + if initialize then + initialize() + end + if not grammar then + return { } + elseif lexer._LEXBYLINE then -- we could keep token + local tokens = { } + local offset = 0 + local noftokens = 0 + local lineparser = lineparsers[lexer] + if not lineparser then -- probably a cmt is more efficient + lineparser = C((1-newline)^0 * newline) / function(line) + local length = #line + local line_tokens = length > 0 and lpegmatch(grammar,line) + if line_tokens then + for i=1,#line_tokens,2 do + noftokens = noftokens + 1 + tokens[noftokens] = line_tokens[i] + noftokens = noftokens + 1 + tokens[noftokens] = line_tokens[i + 1] + offset + end + end + offset = offset + length + if noftokens > 0 and tokens[noftokens] ~= offset then + noftokens = noftokens + 1 + tokens[noftokens] = "default" + noftokens = noftokens + 1 + tokens[noftokens] = offset + 1 + end + end + lineparser = lineparser^0 + lineparsers[lexer] = lineparser + end + lpegmatch(lineparser,text) + return tokens + elseif lexer._CHILDREN then + local hash = lexer._HASH -- hm, was _hash + if not hash then + hash = { } + lexer._HASH = hash + end + grammar = hash[init_style] + if grammar then + lexer._GRAMMAR = grammar + -- lexer._GRAMMAR = lexer._GRAMMAR or grammar + else + for style, style_num in next, lexer._TOKENSTYLES do + if style_num == init_style then + -- the name of the lexers is filtered from the whitespace + -- specification .. 
weird code, should be a reverse hash + local lexer_name = match(style,"^(.+)_whitespace") or lexer._NAME + if lexer._INITIALRULE ~= lexer_name then + grammar = hash[lexer_name] + if not grammar then + build_grammar(lexer,lexer_name) + grammar = lexer._GRAMMAR + hash[lexer_name] = grammar + end + end + break + end + end + grammar = grammar or lexer._GRAMMAR + hash[init_style] = grammar + end + if trace then + report("lexing '%s' with initial style '%s' and %s children",lexer._NAME,#lexer._CHILDREN or 0,init_style) + end + return matched(lexer,grammar,text) + else + if trace then + report("lexing '%s' with initial style '%s'",lexer._NAME,init_style) + end + return matched(lexer,grammar,text) + end +end + +-- hm, changed in 3.24 .. no longer small table but one table: + +function context.token(name, patt) + return patt * Cc(name) * Cp() +end + +-- The next ones were mostly unchanged (till now), we moved it here when 3.41 +-- became close to impossible to combine with cq. overload and a merge was +-- the only solution. It makes later updates more painful but the update to +-- 3.41 was already a bit of a nightmare anyway. + +-- Loading lexers is rather interwoven with what the dll/so sets and +-- it changes over time. So, we need to keep an eye on changes. One +-- problem that we always faced were the limitations in length of +-- lexer names (as they get app/prepended occasionally to strings with +-- a hard coded limit). So, we always used alternative names and now need +-- to make sure this doesn't clash. As I no longer intend to use shipped +-- lexers I could strip away some of the code in the future, but keeping +-- it as reference makes sense. + +-- I spend quite some time figuring out why 3.41 didn't work or crashed which +-- is hard when no stdout is available and when the io library is absent. In +-- the end of of the problems was in the _NAME setting. We set _NAME +-- to e.g. 
'tex' but load from a file with a longer name (which we do + as we don't want to clash with existing files), so we end up + with lexers not being found. + +local whitespaces = { } + +local function push_whitespace(name) + table.insert(whitespaces,lexers.WHITESPACE or "whitespace") + lexers.WHITESPACE = name .. "_whitespace" +end + +local function pop_whitespace() + lexers.WHITESPACE = table.remove(whitespaces) or "whitespace" +end + +local function check_whitespace(lexer,name) + if lexer then + lexer.whitespace = (name or lexer.name or lexer._NAME) .. "_whitespace" + end +end + +function context.new(name,filename) + local lexer = { + _TYPE = "context", + -- + _NAME = name, -- used for token building + _FILENAME = filename, -- for diagnostic purposes + -- + name = name, + filename = filename, + } + if trace then + report("initializing lexer tagged '%s' from file '%s'",name,filename or name) + end + check_whitespace(lexer) + check_styles(lexer) + check_properties(lexer) + return lexer +end + +local function nolexer(name) + local lexer = { + _TYPE = "unset", + _NAME = name, + -- _rules = { }, + } + check_styles(lexer) + check_whitespace(lexer) + check_properties(lexer) + return lexer +end + +local function load_lexer(name,namespace) + if trace then + report("loading lexer file '%s'",name) + end + push_whitespace(namespace or name) -- for traditional lexers .. 
no alt_name yet + local lexer, fullname = context.loadluafile(name) + pop_whitespace() + if not lexer then + report("invalid lexer file '%s'",name) + elseif trace then + report("lexer file '%s' has been loaded",fullname) + end + if type(lexer) ~= "table" then + if trace then + report("lexer file '%s' gets a dummy lexer",name) + end + return nolexer(name) + end + if lexer._TYPE ~= "context" then + lexer._TYPE = "native" + check_styles(lexer) + check_whitespace(lexer,namespace or name) + check_properties(lexer) + end + if not lexer._NAME then + lexer._NAME = name -- so: filename + end + if name ~= namespace then + lexer._NAME = namespace + end + return lexer +end + +-- tracing ... + +local function inspect_lexer(lexer,level) + -- If we had the regular libs available I could use the usual + -- helpers. + local parent = lexer._lexer + lexer._lexer = nil -- prevent endless recursion + local name = lexer._NAME + local function showstyles_1(tag,styles) + local numbers = { } + for k, v in next, styles do + numbers[v] = k + end + -- sort by number and make number hash too + local keys = sortedkeys(numbers) + for i=1,#keys do + local k = keys[i] + local v = numbers[k] + report("[%s %s] %s %s = %s",level,name,tag,k,v) + end + end + local function showstyles_2(tag,styles) + local keys = sortedkeys(styles) + for i=1,#keys do + local k = keys[i] + local v = styles[k] + report("[%s %s] %s %s = %s",level,name,tag,k,v) + end + end + local keys = sortedkeys(lexer) + for i=1,#keys do + local k = keys[i] + local v = lexer[k] + report("[%s %s] root key : %s = %s",level,name,k,tostring(v)) + end + showstyles_1("token style",lexer._TOKENSTYLES) + showstyles_2("extra style",lexer._EXTRASTYLES) + local children = lexer._CHILDREN + if children then + for i=1,#children do + inspect_lexer(children[i],level+1) + end + end + lexer._lexer = parent +end + +function context.inspect(lexer) + inspect_lexer(lexer,0) +end + +-- An optional second argument has been introduced so that one can embed a 
lexer +-- more than once ... maybe something to look into (as not it's done by remembering +-- the start sequence ... quite okay but maybe suboptimal ... anyway, never change +-- a working solution). + +-- namespace can be automatic: if parent then use name of parent (chain) + +function context.loadlexer(filename,namespace) + nesting = nesting + 1 + if not namespace then + namespace = filename + end + local lexer = usedlexers[namespace] -- we load by filename but the internal name can be short + if lexer then + if trace then + report("reusing lexer '%s'",namespace) + end + nesting = nesting - 1 + return lexer + elseif trace then + report("loading lexer '%s'",namespace) + end + -- + if initialize then + initialize() + end + -- + parent_lexer = nil + -- + lexer = load_lexer(filename,namespace) or nolexer(filename,namespace) + usedlexers[filename] = lexer + -- + if not lexer._rules and not lexer._lexer then + lexer._lexer = parent_lexer + end + -- + if lexer._lexer then + local _l = lexer._lexer + local _r = lexer._rules + local _s = lexer._tokenstyles + if not _l._tokenstyles then + _l._tokenstyles = { } + end + if _r then + local rules = _l._rules + local name = lexer.name + for i=1,#_r do + local rule = _r[i] + rules[#rules + 1] = { + name .. "_" .. rule[1], + rule[2], + } + end + end + if _s then + local tokenstyles = _l._tokenstyles + for token, style in next, _s do + tokenstyles[token] = style + end + end + lexer = _l + end + -- + local _r = lexer._rules + if _r then + local _s = lexer._tokenstyles + if _s then + for token, style in next, _s do + add_style(lexer, token, style) + end + end + for i=1,#_r do + local rule = _r[i] + add_rule(lexer, rule[1], rule[2]) + end + build_grammar(lexer) + end + -- + add_style(lexer, lexer.whitespace, lexers.STYLE_WHITESPACE) + -- + local foldsymbols = lexer._foldsymbols + if foldsymbols then + local patterns = foldsymbols._patterns + if patterns then + for i = 1, #patterns do + patterns[i] = "()(" .. patterns[i] .. 
")" + end + end + end + -- + lexer.lex = lexers.lex + lexer.fold = lexers.fold + -- + nesting = nesting - 1 + -- + if inspect then + context.inspect(lexer) + end + -- + return lexer +end + +function context.embed_lexer(parent, child, start_rule, end_rule) -- mostly the same as the original + local embeddedrules = child._EMBEDDEDRULES + if not embeddedrules then + embeddedrules = { } + child._EMBEDDEDRULES = embeddedrules + end + if not child._RULES then + local rules = child._rules + if not rules then + report("child lexer '%s' has no rules",child._NAME or "unknown") + rules = { } + child._rules = rules + end + for i=1,#rules do + local rule = rules[i] + add_rule(child, rule[1], rule[2]) + end + end + embeddedrules[parent._NAME] = { + ["start_rule"] = start_rule, + ["token_rule"] = join_tokens(child), + ["end_rule"] = end_rule + } + local children = parent._CHILDREN + if not children then + children = { } + parent._CHILDREN = children + end + children[#children + 1] = child + local tokenstyles = parent._tokenstyles + if not tokenstyles then + tokenstyles = { } + parent._tokenstyles = tokenstyles + end + local childname = child._NAME + local whitespace = childname .. "_whitespace" + tokenstyles[whitespace] = lexers.STYLE_WHITESPACE -- all these STYLE_THINGS will go .. 
just a proper hash + if trace then + report("using whitespace '%s' as trigger for '%s' with property '%s'",whitespace,childname,lexers.STYLE_WHITESPACE) + end + local childstyles = child._tokenstyles + if childstyles then + for token, style in next, childstyles do + tokenstyles[token] = style + end + end + child._lexer = parent + parent_lexer = parent +end + +-- we now move the adapted code to the lexers namespace + +lexers.new = context.new +lexers.load = context.loadlexer +------.loadlexer = context.loadlexer +lexers.loadluafile = context.loadluafile +lexers.embed_lexer = context.embed_lexer +lexers.fold = context.fold +lexers.lex = context.lex +lexers.token = context.token +lexers.word_match = context.word_match +lexers.exact_match = context.exact_match +lexers.just_match = context.just_match +lexers.inspect = context.inspect +lexers.report = context.report +lexers.inform = context.inform + +-- helper .. alas ... the lexer's lua instance is rather crippled .. not even +-- math is part of it + +do + + local floor = math and math.floor + local char = string.char + + if not floor then + + floor = function(n) + return tonumber(format("%d",n)) + end + + math = math or { } + + math.floor = floor + + end + + local function utfchar(n) + if n < 0x80 then + return char(n) + elseif n < 0x800 then + return char( + 0xC0 + floor(n/0x40), + 0x80 + (n % 0x40) + ) + elseif n < 0x10000 then + return char( + 0xE0 + floor(n/0x1000), + 0x80 + (floor(n/0x40) % 0x40), + 0x80 + (n % 0x40) + ) + elseif n < 0x40000 then + return char( + 0xF0 + floor(n/0x40000), + 0x80 + floor(n/0x1000), + 0x80 + (floor(n/0x40) % 0x40), + 0x80 + (n % 0x40) + ) + else + -- return char( + -- 0xF1 + floor(n/0x1000000), + -- 0x80 + floor(n/0x40000), + -- 0x80 + floor(n/0x1000), + -- 0x80 + (floor(n/0x40) % 0x40), + -- 0x80 + (n % 0x40) + -- ) + return "?" 
+ end + end + + context.utfchar = utfchar + + -- a helper from l-lpeg: + + local function make(t) + local p + for k, v in next, t do + if not p then + if next(v) then + p = P(k) * make(v) + else + p = P(k) + end + else + if next(v) then + p = p + P(k) * make(v) + else + p = p + P(k) + end + end + end + return p + end + + function lpeg.utfchartabletopattern(list) + local tree = { } + for i=1,#list do + local t = tree + for c in gmatch(list[i],".") do + if not t[c] then + t[c] = { } + end + t = t[c] + end + end + return make(tree) + end + + patterns.invisibles = lpeg.utfchartabletopattern { + utfchar(0x00A0), -- nbsp + utfchar(0x2000), -- enquad + utfchar(0x2001), -- emquad + utfchar(0x2002), -- enspace + utfchar(0x2003), -- emspace + utfchar(0x2004), -- threeperemspace + utfchar(0x2005), -- fourperemspace + utfchar(0x2006), -- sixperemspace + utfchar(0x2007), -- figurespace + utfchar(0x2008), -- punctuationspace + utfchar(0x2009), -- breakablethinspace + utfchar(0x200A), -- hairspace + utfchar(0x200B), -- zerowidthspace + utfchar(0x202F), -- narrownobreakspace + utfchar(0x205F), -- math thinspace + } + + -- now we can make: + + patterns.iwordtoken = patterns.wordtoken - patterns.invisibles + patterns.iwordpattern = patterns.iwordtoken^3 + +end + +-- The following helpers are not used, partially replaced by other mechanisms, and +-- when needed I'll first optimize them. I only made them somewhat more readable. + +function lexers.delimited_range(chars, single_line, no_escape, balanced) -- unchanged + local s = sub(chars,1,1) + local e = #chars == 2 and sub(chars,2,2) or s + local range + local b = balanced and s or "" + local n = single_line and "\n" or "" + if no_escape then + local invalid = S(e .. n .. b) + range = patterns.any - invalid + else + local invalid = S(e .. n .. 
b) + patterns.backslash + range = patterns.any - invalid + patterns.backslash * patterns.any + end + if balanced and s ~= e then + return P { + s * (range + V(1))^0 * e + } + else + return s * range^0 * P(e)^-1 + end +end + +function lexers.starts_line(patt) -- unchanged + return P ( function(input, index) + if index == 1 then + return index + end + local char = sub(input,index - 1,index - 1) + if char == "\n" or char == "\r" or char == "\f" then + return index + end + end ) * patt +end + +function lexers.last_char_includes(s) -- unchanged + s = "[" .. gsub(s,"[-%%%[]", "%%%1") .. "]" + return P ( function(input, index) + if index == 1 then + return index + end + local i = index + while match(sub(input,i - 1,i - 1),"[ \t\r\n\f]") do + i = i - 1 + end + if match(sub(input,i - 1,i - 1),s) then + return index + end + end) +end + +function lexers.nested_pair(start_chars, end_chars) -- unchanged + local s = start_chars + local e = P(end_chars)^-1 + return P { + s * (patterns.any - s - end_chars + V(1))^0 * e + } +end + +local function prev_line_is_comment(prefix, text, pos, line, s) -- unchanged + local start = find(line,"%S") + if start < s and not find(line,prefix,start,true) then + return false + end + local p = pos - 1 + if sub(text,p,p) == "\n" then + p = p - 1 + if sub(text,p,p) == "\r" then + p = p - 1 + end + if sub(text,p,p) ~= "\n" then + while p > 1 and sub(text,p - 1,p - 1) ~= "\n" + do p = p - 1 + end + while find(sub(text,p,p),"^[\t ]$") do + p = p + 1 + end + return sub(text,p,p + #prefix - 1) == prefix + end + end + return false +end + +local function next_line_is_comment(prefix, text, pos, line, s) + local p = find(text,"\n",pos + s) + if p then + p = p + 1 + while find(sub(text,p,p),"^[\t ]$") do + p = p + 1 + end + return sub(text,p,p + #prefix - 1) == prefix + end + return false +end + +function lexers.fold_line_comments(prefix) + local property_int = lexers.property_int + return function(text, pos, line, s) + if property_int["fold.line.comments"] == 
0 then + return 0 + end + if s > 1 and match(line,"^%s*()") < s then + return 0 + end + local prev_line_comment = prev_line_is_comment(prefix, text, pos, line, s) + local next_line_comment = next_line_is_comment(prefix, text, pos, line, s) + if not prev_line_comment and next_line_comment then + return 1 + end + if prev_line_comment and not next_line_comment then + return -1 + end + return 0 + end +end + +-- done + +return lexers diff --git a/context/data/scite/context/lexers/themes/scite-context-theme.lua b/context/data/scite/context/lexers/themes/scite-context-theme.lua new file mode 100644 index 000000000..b0c63fe39 --- /dev/null +++ b/context/data/scite/context/lexers/themes/scite-context-theme.lua @@ -0,0 +1,150 @@ +local info = { + version = 1.002, + comment = "theme for scintilla lpeg lexer for context/metafun", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +-- context_path = string.split(os.resultof("mtxrun --find-file context.mkiv"))[1] or "" + +-- What used to be proper Lua definitions are in 3.42 SciTE properties although +-- integration is still somewhat partial. Also, the indexed style specification is +-- now a hash (which indeed makes more sense). However, the question is: am I +-- going to rewrite the style bit? It anyway makes more sense to keep this file +-- somewhat neutral as we no longer need to be compatible. However, we cannot be +-- sure of helpers being present yet when this file is loaded, so we are somewhat +-- crippled. On the other hand, I don't see other schemes being used with the +-- context lexers. + +-- The next kludge is no longer needed which is good! +-- +-- if GTK then -- WIN32 GTK OSX CURSES +-- font_name = '!' .. font_name +-- end + +-- I need to play with these, some work ok: +-- +-- eolfilled noteolfilled +-- characterset:u|l +-- visible notvisible +-- changeable notchangeable (this way we can protect styles, e.g. 
preamble?) +-- hotspot nothotspot + +local font_name = 'Dejavu Sans Mono' +local font_size = '14' + +local colors = { + red = { '7F', '00', '00' }, + green = { '00', '7F', '00' }, + blue = { '00', '00', '7F' }, + cyan = { '00', '7F', '7F' }, + magenta = { '7F', '00', '7F' }, + yellow = { '7F', '7F', '00' }, + orange = { 'B0', '7F', '00' }, + -- + white = { 'FF', 'FF', 'FF' }, + light = { 'CF', 'CF', 'CF' }, + grey = { '80', '80', '80' }, + dark = { '4F', '4F', '4F' }, + black = { '00', '00', '00' }, + -- + selection = { 'F7', 'F7', 'F7' }, + logpanel = { 'E7', 'E7', 'E7' }, + textpanel = { 'CF', 'CF', 'CF' }, + linepanel = { 'A7', 'A7', 'A7' }, + tippanel = { '44', '44', '44' }, + -- + right = { '00', '00', 'FF' }, + wrong = { 'FF', '00', '00' }, +} + +local styles = { + + ["whitespace"] = { }, + ["default"] = { font = font_name, size = font_size, fore = colors.black, back = colors.textpanel }, + ["default"] = { font = font_name, size = font_size, fore = colors.black }, + ["number"] = { fore = colors.cyan }, + ["comment"] = { fore = colors.yellow }, + ["keyword"] = { fore = colors.blue, bold = true }, + ["string"] = { fore = colors.magenta }, + -- ["preproc"] = { fore = colors.yellow, bold = true }, + ["error"] = { fore = colors.red }, + ["label"] = { fore = colors.red, bold = true }, + + ["nothing"] = { }, + ["class"] = { fore = colors.black, bold = true }, + ["function"] = { fore = colors.black, bold = true }, + ["constant"] = { fore = colors.cyan, bold = true }, + ["operator"] = { fore = colors.blue }, + ["regex"] = { fore = colors.magenta }, + ["preprocessor"] = { fore = colors.yellow, bold = true }, + ["tag"] = { fore = colors.cyan }, + ["type"] = { fore = colors.blue }, + ["variable"] = { fore = colors.black }, + ["identifier"] = { }, + + ["linenumber"] = { back = colors.linepanel }, + ["bracelight"] = { fore = colors.orange, bold = true }, + ["bracebad"] = { fore = colors.orange, bold = true }, + ["controlchar"] = { }, + ["indentguide"] = { fore = 
colors.linepanel, back = colors.white }, + ["calltip"] = { fore = colors.white, back = colors.tippanel }, + + ["invisible"] = { back = colors.orange }, + ["quote"] = { fore = colors.blue, bold = true }, + ["special"] = { fore = colors.blue }, + ["extra"] = { fore = colors.yellow }, + ["embedded"] = { fore = colors.black, bold = true }, + ["char"] = { fore = colors.magenta }, + ["reserved"] = { fore = colors.magenta, bold = true }, + ["definition"] = { fore = colors.black, bold = true }, + ["okay"] = { fore = colors.dark }, + ["warning"] = { fore = colors.orange }, + ["standout"] = { fore = colors.orange, bold = true }, + ["command"] = { fore = colors.green, bold = true }, + ["internal"] = { fore = colors.orange, bold = true }, + ["preamble"] = { fore = colors.yellow }, + ["grouping"] = { fore = colors.red }, + ["primitive"] = { fore = colors.blue, bold = true }, + ["plain"] = { fore = colors.dark, bold = true }, + ["user"] = { fore = colors.green }, + ["data"] = { fore = colors.cyan, bold = true }, + + -- equal to default: + + ["text"] = { font = font_name, size = font_size, fore = colors.black, back = colors.textpanel }, + ["text"] = { font = font_name, size = font_size, fore = colors.black }, + +} + +local properties = { + ["fold.by.parsing"] = 1, + ["fold.by.indentation"] = 0, + ["fold.by.line"] = 0, + ["fold.line.comments"] = 0, + -- + ["lexer.context.log"] = 1, -- log errors and warnings + ["lexer.context.trace"] = 0, -- show loading, initializations etc + ["lexer.context.detail"] = 0, -- show more detail when tracing + ["lexer.context.show"] = 0, -- show result of lexing + ["lexer.context.collapse"] = 0, -- make lexing results somewhat more efficient + ["lexer.context.inspect"] = 0, -- show some info about lexer (styles and so) + -- +-- ["lexer.context.log"] = 1, -- log errors and warnings +-- ["lexer.context.trace"] = 1, -- show loading, initializations etc +} + +local lexer = lexer or require("lexer") +local context = lexer.context + +if context then + 
context.inform("loading context (style) properties") + if context.registerstyles then + context.registerstyles(styles) + end + if context.registerproperties then + context.registerproperties(properties) + end +end + diff --git a/context/data/scite/context/scite-context-data-context.properties b/context/data/scite/context/scite-context-data-context.properties new file mode 100644 index 000000000..3e53862f7 --- /dev/null +++ b/context/data/scite/context/scite-context-data-context.properties @@ -0,0 +1,193 @@ +keywordclass.context.constants=\ +zerocount minusone minustwo plusone \ +plustwo plusthree plusfour plusfive plussix \ +plusseven pluseight plusnine plusten plussixteen \ +plushundred plusthousand plustenthousand plustwentythousand medcard \ +maxcard zeropoint onepoint halfapoint onebasepoint \ +maxdimen scaledpoint thousandpoint points halfpoint \ +zeroskip zeromuskip onemuskip pluscxxvii pluscxxviii \ +pluscclv pluscclvi normalpagebox endoflinetoken outputnewlinechar \ +emptytoks empty undefined voidbox emptybox \ +emptyvbox emptyhbox bigskipamount medskipamount smallskipamount \ +fmtname fmtversion texengine texenginename texengineversion \ +luatexengine pdftexengine xetexengine unknownengine etexversion \ +pdftexversion xetexversion xetexrevision activecatcode bgroup \ +egroup endline conditionaltrue conditionalfalse attributeunsetvalue \ +uprotationangle rightrotationangle downrotationangle leftrotationangle inicatcodes \ +ctxcatcodes texcatcodes notcatcodes txtcatcodes vrbcatcodes \ +prtcatcodes nilcatcodes luacatcodes tpacatcodes tpbcatcodes \ +xmlcatcodes ctdcatcodes escapecatcode begingroupcatcode endgroupcatcode \ +mathshiftcatcode alignmentcatcode endoflinecatcode parametercatcode superscriptcatcode \ +subscriptcatcode ignorecatcode spacecatcode lettercatcode othercatcode \ +activecatcode commentcatcode invalidcatcode tabasciicode newlineasciicode \ +formfeedasciicode endoflineasciicode endoffileasciicode spaceasciicode hashasciicode \ 
+dollarasciicode commentasciicode ampersandasciicode colonasciicode backslashasciicode \ +circumflexasciicode underscoreasciicode leftbraceasciicode barasciicode rightbraceasciicode \ +tildeasciicode delasciicode lessthanasciicode morethanasciicode doublecommentsignal \ +atsignasciicode exclamationmarkasciicode questionmarkasciicode doublequoteasciicode singlequoteasciicode \ +forwardslashasciicode primeasciicode activemathcharcode activetabtoken activeformfeedtoken \ +activeendoflinetoken batchmodecode nonstopmodecode scrollmodecode errorstopmodecode \ +bottomlevelgroupcode simplegroupcode hboxgroupcode adjustedhboxgroupcode vboxgroupcode \ +vtopgroupcode aligngroupcode noaligngroupcode outputgroupcode mathgroupcode \ +discretionarygroupcode insertgroupcode vcentergroupcode mathchoicegroupcode semisimplegroupcode \ +mathshiftgroupcode mathleftgroupcode vadjustgroupcode charnodecode hlistnodecode \ +vlistnodecode rulenodecode insertnodecode marknodecode adjustnodecode \ +ligaturenodecode discretionarynodecode whatsitnodecode mathnodecode gluenodecode \ +kernnodecode penaltynodecode unsetnodecode mathsnodecode charifcode \ +catifcode numifcode dimifcode oddifcode vmodeifcode \ +hmodeifcode mmodeifcode innerifcode voidifcode hboxifcode \ +vboxifcode xifcode eofifcode trueifcode falseifcode \ +caseifcode definedifcode csnameifcode fontcharifcode fontslantperpoint \ +fontinterwordspace fontinterwordstretch fontinterwordshrink fontexheight fontemwidth \ +fontextraspace slantperpoint interwordspace interwordstretch interwordshrink \ +exheight emwidth extraspace mathsupdisplay mathsupnormal \ +mathsupcramped mathsubnormal mathsubcombined mathaxisheight startmode \ +stopmode startnotmode stopnotmode startmodeset stopmodeset \ +doifmode doifmodeelse doifnotmode startmodeset stopmodeset \ +startallmodes stopallmodes startnotallmodes stopnotallmodes doifallmodes \ +doifallmodeselse doifnotallmodes startenvironment stopenvironment environment \ +startcomponent stopcomponent 
component startproduct stopproduct \ +product startproject stopproject project starttext \ +stoptext startnotext stopnotext startdocument stopdocument \ +documentvariable setupdocument startmodule stopmodule usemodule \ +usetexmodule useluamodule setupmodule currentmoduleparameter moduleparameter \ +startTEXpage stopTEXpage enablemode disablemode preventmode \ +globalenablemode globaldisablemode globalpreventmode pushmode popmode \ +typescriptone typescripttwo typescriptthree mathsizesuffix mathordcode \ +mathopcode mathbincode mathrelcode mathopencode mathclosecode \ +mathpunctcode mathalphacode mathinnercode mathnothingcode mathlimopcode \ +mathnolopcode mathboxcode mathchoicecode mathaccentcode mathradicalcode \ +constantnumber constantnumberargument constantdimen constantdimenargument constantemptyargument \ +continueifinputfile luastringsep !!bs !!es lefttorightmark \ +righttoleftmark breakablethinspace nobreakspace narrownobreakspace zerowidthnobreakspace \ +ideographicspace ideographichalffillspace twoperemspace threeperemspace fourperemspace \ +fiveperemspace sixperemspace figurespace punctuationspace hairspace \ +zerowidthspace zerowidthnonjoiner zerowidthjoiner zwnj zwj + +keywordclass.context.helpers=\ +startsetups stopsetups startxmlsetups stopxmlsetups \ +startluasetups stopluasetups starttexsetups stoptexsetups startrawsetups \ +stoprawsetups startlocalsetups stoplocalsetups starttexdefinition stoptexdefinition \ +starttexcode stoptexcode startcontextcode stopcontextcode startcontextdefinitioncode \ +stopcontextdefinitioncode doifsetupselse doifsetups doifnotsetups setup \ +setups texsetup xmlsetup luasetup directsetup \ +doifelsecommandhandler doifnotcommandhandler doifcommandhandler newmode setmode \ +resetmode newsystemmode setsystemmode resetsystemmode pushsystemmode \ +popsystemmode booleanmodevalue newcount newdimen newskip \ +newmuskip newbox newtoks newread newwrite \ +newmarks newinsert newattribute newif newlanguage \ +newfamily newfam 
newhelp then begcsname \ +strippedcsname firstargumentfalse firstargumenttrue secondargumentfalse secondargumenttrue \ +thirdargumentfalse thirdargumenttrue fourthargumentfalse fourthargumenttrue fifthargumentfalse \ +fifthsargumenttrue sixthargumentfalse sixtsargumenttrue doglobal dodoglobal \ +redoglobal resetglobal donothing dontcomplain forgetall \ +donetrue donefalse htdp unvoidbox hfilll \ +vfilll mathbox mathlimop mathnolop mathnothing \ +mathalpha currentcatcodetable defaultcatcodetable catcodetablename newcatcodetable \ +startcatcodetable stopcatcodetable startextendcatcodetable stopextendcatcodetable pushcatcodetable \ +popcatcodetable restorecatcodes setcatcodetable letcatcodecommand defcatcodecommand \ +uedcatcodecommand hglue vglue hfillneg vfillneg \ +hfilllneg vfilllneg ruledhss ruledhfil ruledhfill \ +ruledhfilneg ruledhfillneg normalhfillneg ruledvss ruledvfil \ +ruledvfill ruledvfilneg ruledvfillneg normalvfillneg ruledhbox \ +ruledvbox ruledvtop ruledvcenter ruledmbox ruledhskip \ +ruledvskip ruledkern ruledmskip ruledmkern ruledhglue \ +ruledvglue normalhglue normalvglue ruledpenalty filledhboxb \ +filledhboxr filledhboxg filledhboxc filledhboxm filledhboxy \ +filledhboxk scratchcounter globalscratchcounter scratchdimen globalscratchdimen \ +scratchskip globalscratchskip scratchmuskip globalscratchmuskip scratchtoks \ +globalscratchtoks scratchbox globalscratchbox normalbaselineskip normallineskip \ +normallineskiplimit availablehsize localhsize setlocalhsize nextbox \ +dowithnextbox dowithnextboxcs dowithnextboxcontent dowithnextboxcontentcs scratchwidth \ +scratchheight scratchdepth scratchoffset scratchdistance scratchhsize \ +scratchvsize scratchxoffset scratchyoffset scratchhoffset scratchvoffset \ +scratchxposition scratchyposition scratchtopoffset scratchbottomoffset scratchleftoffset \ +scratchrightoffset scratchcounterone scratchcountertwo scratchcounterthree scratchdimenone \ +scratchdimentwo scratchdimenthree scratchskipone 
scratchskiptwo scratchskipthree \ +scratchmuskipone scratchmuskiptwo scratchmuskipthree scratchtoksone scratchtokstwo \ +scratchtoksthree scratchboxone scratchboxtwo scratchboxthree scratchnx \ +scratchny scratchmx scratchmy scratchunicode scratchleftskip \ +scratchrightskip scratchtopskip scratchbottomskip doif doifnot \ +doifelse doifinset doifnotinset doifinsetelse doifnextcharelse \ +doifnextoptionalelse doifnextoptionalcselse doiffastoptionalcheckelse doifnextbgroupelse doifnextbgroupcselse \ +doifnextparenthesiselse doifundefinedelse doifdefinedelse doifundefined doifdefined \ +doifelsevalue doifvalue doifnotvalue doifnothing doifsomething \ +doifelsenothing doifsomethingelse doifvaluenothing doifvaluesomething doifelsevaluenothing \ +doifdimensionelse doifnumberelse doifnumber doifnotnumber doifcommonelse \ +doifcommon doifnotcommon doifinstring doifnotinstring doifinstringelse \ +doifassignmentelse docheckassignment tracingall tracingnone loggingall \ +removetoks appendtoks prependtoks appendtotoks prependtotoks \ +to endgraf endpar everyendpar reseteverypar \ +finishpar empty null space quad \ +enspace obeyspaces obeylines obeyedspace obeyedline \ +normalspace executeifdefined singleexpandafter doubleexpandafter tripleexpandafter \ +dontleavehmode removelastspace removeunwantedspaces keepunwantedspaces wait \ +writestatus define defineexpandable redefine setmeasure \ +setemeasure setgmeasure setxmeasure definemeasure freezemeasure \ +measure measured installcorenamespace getvalue getuvalue \ +setvalue setevalue setgvalue setxvalue letvalue \ +letgvalue resetvalue undefinevalue ignorevalue setuvalue \ +setuevalue setugvalue setuxvalue globallet glet \ +udef ugdef uedef uxdef checked \ +unique getparameters geteparameters getgparameters getxparameters \ +forgetparameters copyparameters getdummyparameters dummyparameter directdummyparameter \ +setdummyparameter letdummyparameter usedummystyleandcolor usedummystyleparameter usedummycolorparameter \ 
+processcommalist processcommacommand quitcommalist quitprevcommalist processaction \ +processallactions processfirstactioninset processallactionsinset unexpanded expanded \ +startexpanded stopexpanded protected protect unprotect \ +firstofoneargument firstoftwoarguments secondoftwoarguments firstofthreearguments secondofthreearguments \ +thirdofthreearguments firstoffourarguments secondoffourarguments thirdoffourarguments fourthoffourarguments \ +firstoffivearguments secondoffivearguments thirdoffivearguments fourthoffivearguments fifthoffivearguments \ +firstofsixarguments secondofsixarguments thirdofsixarguments fourthofsixarguments fifthofsixarguments \ +sixthofsixarguments firstofoneunexpanded gobbleoneargument gobbletwoarguments gobblethreearguments \ +gobblefourarguments gobblefivearguments gobblesixarguments gobblesevenarguments gobbleeightarguments \ +gobbleninearguments gobbletenarguments gobbleoneoptional gobbletwooptionals gobblethreeoptionals \ +gobblefouroptionals gobblefiveoptionals dorecurse doloop exitloop \ +dostepwiserecurse recurselevel recursedepth dofastloopcs dowith \ +newconstant setnewconstant setconstant setconstantvalue newconditional \ +settrue setfalse settruevalue setfalsevalue newmacro \ +setnewmacro newfraction newsignal dosingleempty dodoubleempty \ +dotripleempty doquadrupleempty doquintupleempty dosixtupleempty doseventupleempty \ +dosingleargument dodoubleargument dotripleargument doquadrupleargument doquintupleargument \ +dosixtupleargument doseventupleargument dosinglegroupempty dodoublegroupempty dotriplegroupempty \ +doquadruplegroupempty doquintuplegroupempty permitspacesbetweengroups dontpermitspacesbetweengroups nopdfcompression \ +maximumpdfcompression normalpdfcompression modulonumber dividenumber getfirstcharacter \ +doiffirstcharelse startnointerference stopnointerference twodigits threedigits \ +leftorright strut setstrut strutbox strutht \ +strutdp strutwd struthtdp begstrut endstrut \ +lineheight ordordspacing 
ordopspacing ordbinspacing ordrelspacing \ +ordopenspacing ordclosespacing ordpunctspacing ordinnerspacing opordspacing \ +opopspacing opbinspacing oprelspacing opopenspacing opclosespacing \ +oppunctspacing opinnerspacing binordspacing binopspacing binbinspacing \ +binrelspacing binopenspacing binclosespacing binpunctspacing bininnerspacing \ +relordspacing relopspacing relbinspacing relrelspacing relopenspacing \ +relclosespacing relpunctspacing relinnerspacing openordspacing openopspacing \ +openbinspacing openrelspacing openopenspacing openclosespacing openpunctspacing \ +openinnerspacing closeordspacing closeopspacing closebinspacing closerelspacing \ +closeopenspacing closeclosespacing closepunctspacing closeinnerspacing punctordspacing \ +punctopspacing punctbinspacing punctrelspacing punctopenspacing punctclosespacing \ +punctpunctspacing punctinnerspacing innerordspacing inneropspacing innerbinspacing \ +innerrelspacing inneropenspacing innerclosespacing innerpunctspacing innerinnerspacing \ +normalreqno startimath stopimath normalstartimath normalstopimath \ +startdmath stopdmath normalstartdmath normalstopdmath uncramped \ +cramped triggermathstyle mathstylefont mathsmallstylefont mathstyleface \ +mathsmallstyleface mathstylecommand mathpalette mathstylehbox mathstylevbox \ +mathstylevcenter mathstylevcenteredhbox mathstylevcenteredvbox mathtext setmathsmalltextbox \ +setmathtextbox triggerdisplaystyle triggertextstyle triggerscriptstyle triggerscriptscriptstyle \ +triggeruncrampedstyle triggercrampedstyle triggersmallstyle triggeruncrampedsmallstyle triggercrampedsmallstyle \ +triggerbigstyle triggeruncrampedbigstyle triggercrampedbigstyle luaexpr expdoifelse \ +expdoif expdoifnot expdoifcommonelse expdoifinsetelse ctxdirectlua \ +ctxlatelua ctxsprint ctxwrite ctxcommand ctxdirectcommand \ +ctxlatecommand ctxreport ctxlua luacode lateluacode \ +directluacode registerctxluafile ctxloadluafile luaversion luamajorversion \ +luaminorversion ctxluacode 
luaconditional luaexpanded startluaparameterset \ +stopluaparameterset luaparameterset definenamedlua obeylualines obeyluatokens \ +startluacode stopluacode startlua stoplua startctxfunction \ +stopctxfunction ctxfunction startctxfunctiondefinition stopctxfunctiondefinition carryoverpar \ +assumelongusagecs Umathbotaccent righttolefthbox lefttorighthbox righttoleftvbox \ +lefttorightvbox righttoleftvtop lefttorightvtop rtlhbox ltrhbox \ +rtlvbox ltrvbox rtlvtop ltrvtop autodirhbox \ +autodirvbox autodirvtop lefttoright righttoleft synchronizelayoutdirection \ +synchronizedisplaydirection synchronizeinlinedirection lesshyphens morehyphens nohyphens \ +dohyphens Ucheckedstartdisplaymath Ucheckedstopdisplaymath + diff --git a/context/data/scite/scite-context-data-interfaces.properties b/context/data/scite/context/scite-context-data-interfaces.properties index 9c2ca4623..9c2ca4623 100644 --- a/context/data/scite/scite-context-data-interfaces.properties +++ b/context/data/scite/context/scite-context-data-interfaces.properties diff --git a/context/data/scite/scite-context-data-metafun.properties b/context/data/scite/context/scite-context-data-metafun.properties index 9381b4f8d..9381b4f8d 100644 --- a/context/data/scite/scite-context-data-metafun.properties +++ b/context/data/scite/context/scite-context-data-metafun.properties diff --git a/context/data/scite/scite-context-data-metapost.properties b/context/data/scite/context/scite-context-data-metapost.properties index 88ace57ca..88ace57ca 100644 --- a/context/data/scite/scite-context-data-metapost.properties +++ b/context/data/scite/context/scite-context-data-metapost.properties diff --git a/context/data/scite/scite-context-data-tex.properties b/context/data/scite/context/scite-context-data-tex.properties index 195125433..d1780794d 100644 --- a/context/data/scite/scite-context-data-tex.properties +++ b/context/data/scite/context/scite-context-data-tex.properties @@ -50,10 +50,10 @@ attribute attributedef catcodetable 
clearmarks crampeddisplaystyle \ crampedscriptscriptstyle crampedscriptstyle crampedtextstyle fontid formatname \ gleaders ifabsdim ifabsnum ifprimitive initcatcodetable \ latelua luaescapestring luastartup luatexdatestamp luatexrevision \ -luatexversion mathstyle nokerns noligs outputbox \ -pageleftoffset pagetopoffset postexhyphenchar posthyphenchar preexhyphenchar \ -prehyphenchar primitive savecatcodetable scantextokens suppressfontnotfounderror \ -suppressifcsnameerror suppresslongerror suppressoutererror synctex +luatexversion luafunction mathstyle nokerns noligs \ +outputbox pageleftoffset pagetopoffset postexhyphenchar posthyphenchar \ +preexhyphenchar prehyphenchar primitive savecatcodetable scantextokens \ +suppressfontnotfounderror suppressifcsnameerror suppresslongerror suppressoutererror synctex keywordclass.tex.omega=\ OmegaVersion bodydir chardp charht \ @@ -124,114 +124,113 @@ attribute attributedef badness baselineskip batchmode \ begingroup belowdisplayshortskip belowdisplayskip binoppenalty bodydir \ botmark botmarks box boxdir boxmaxdepth \ brokenpenalty catcode catcodetable char chardef \ -chardp charht charit charwd cleaders \ -clearmarks closein closeout clubpenalties clubpenalty \ -copy count countdef cr crampeddisplaystyle \ -crampedscriptscriptstyle crampedscriptstyle crampedtextstyle crcr csname \ -currentgrouplevel currentgrouptype currentifbranch currentiflevel currentiftype \ -day deadcycles def defaulthyphenchar defaultskewchar \ -delcode delimiter delimiterfactor delimitershortfall detokenize \ -dimen dimendef dimexpr directlua discretionary \ -displayindent displaylimits displaystyle displaywidowpenalties displaywidowpenalty \ -displaywidth divide doublehyphendemerits dp dump \ -eTeXVersion eTeXminorversion eTeXrevision eTeXversion edef \ -efcode else emergencystretch end endcsname \ -endgroup endinput endlinechar eqno errhelp \ -errmessage errorcontextlines errorstopmode escapechar everycr \ -everydisplay everyeof everyhbox 
everyjob everymath \ -everypar everyvbox exhyphenchar exhyphenpenalty expandafter \ -expanded fam fi finalhyphendemerits firstmark \ -firstmarks floatingpenalty font fontchardp fontcharht \ -fontcharic fontcharwd fontdimen fontid fontname \ -formatname futurelet gdef gleaders global \ -globaldefs glueexpr glueshrink glueshrinkorder gluestretch \ -gluestretchorder gluetomu halign hangafter hangindent \ -hbadness hbox hfil hfill hfilneg \ -hfuzz hoffset holdinginserts hrule hsize \ -hskip hss ht hyphenation hyphenchar \ -hyphenpenalty if ifabsdim ifabsnum ifcase \ -ifcat ifcsname ifdefined ifdim ifeof \ -iffalse iffontchar ifhbox ifhmode ifincsname \ -ifinner ifmmode ifnum ifodd ifpdfabsdim \ -ifpdfabsnum ifpdfprimitive ifprimitive iftrue ifvbox \ -ifvmode ifvoid ifx ignorespaces immediate \ -indent initcatcodetable input inputlineno insert \ -insertpenalties interactionmode interlinepenalties interlinepenalty jobname \ -kern language lastbox lastkern lastlinefit \ -lastnodetype lastpenalty lastskip latelua lccode \ -leaders left leftghost lefthyphenmin leftmarginkern \ -leftskip leqno let letterspacefont limits \ -linepenalty lineskip lineskiplimit localbrokenpenalty localinterlinepenalty \ -localleftbox localrightbox long looseness lower \ -lowercase lpcode luaescapestring luastartup luatexdatestamp \ -luatexrevision luatexversion mag mark marks \ -mathaccent mathbin mathchar mathchardef mathchoice \ -mathclose mathcode mathdir mathinner mathop \ -mathopen mathord mathpunct mathrel mathstyle \ -mathsurround maxdeadcycles maxdepth meaning medmuskip \ -message middle mkern month moveleft \ -moveright mskip muexpr multiply muskip \ -muskipdef mutoglue newlinechar noalign noboundary \ -noexpand noindent nokerns noligs nolimits \ -nolocaldirs nolocalwhatsits nonscript nonstopmode nulldelimiterspace \ -nullfont number numexpr odelcode odelimiter \ -omathaccent omathchar omathchardef omathcode omit \ -openin openout or oradical outer \ -output outputbox outputpenalty over 
overfullrule \ -overline overwithdelims pagebottomoffset pagedepth pagedir \ -pagediscards pagefilllstretch pagefillstretch pagefilstretch pagegoal \ -pageheight pageleftoffset pagerightoffset pageshrink pagestretch \ -pagetopoffset pagetotal pagewidth par pardir \ -parfillskip parindent parshape parshapedimen parshapeindent \ -parshapelength parskip patterns pausing pdfadjustspacing \ -pdfannot pdfcatalog pdfcolorstack pdfcolorstackinit pdfcompresslevel \ -pdfcopyfont pdfcreationdate pdfdecimaldigits pdfdest pdfdestmargin \ -pdfdraftmode pdfeachlinedepth pdfeachlineheight pdfendlink pdfendthread \ -pdffirstlineheight pdffontattr pdffontexpand pdffontname pdffontobjnum \ -pdffontsize pdfgamma pdfgentounicode pdfglyphtounicode pdfhorigin \ -pdfignoreddimen pdfimageapplygamma pdfimagegamma pdfimagehicolor pdfimageresolution \ -pdfincludechars pdfinclusioncopyfonts pdfinclusionerrorlevel pdfinfo pdfinsertht \ -pdflastannot pdflastlinedepth pdflastlink pdflastobj pdflastxform \ -pdflastximage pdflastximagecolordepth pdflastximagepages pdflastxpos pdflastypos \ -pdflinkmargin pdfliteral pdfmapfile pdfmapline pdfminorversion \ -pdfnames pdfnoligatures pdfnormaldeviate pdfobj pdfobjcompresslevel \ -pdfoptionpdfminorversion pdfoutline pdfoutput pdfpageattr pdfpagebox \ -pdfpageheight pdfpageref pdfpageresources pdfpagesattr pdfpagewidth \ -pdfpkmode pdfpkresolution pdfprimitive pdfprotrudechars pdfpxdimen \ -pdfrandomseed pdfrefobj pdfrefxform pdfrefximage pdfreplacefont \ -pdfrestore pdfretval pdfsave pdfsavepos pdfsetmatrix \ -pdfsetrandomseed pdfstartlink pdfstartthread pdftexbanner pdftexrevision \ -pdftexversion pdfthread pdfthreadmargin pdftracingfonts pdftrailer \ -pdfuniformdeviate pdfuniqueresname pdfvorigin pdfxform pdfxformattr \ -pdfxformname pdfxformresources pdfximage pdfximagebbox penalty \ -postdisplaypenalty postexhyphenchar posthyphenchar predisplaydirection predisplaypenalty \ -predisplaysize preexhyphenchar prehyphenchar pretolerance prevdepth \ 
-prevgraf primitive protected quitvmode radical \ -raise read readline relax relpenalty \ -right rightghost righthyphenmin rightmarginkern rightskip \ -romannumeral rpcode savecatcodetable savinghyphcodes savingvdiscards \ -scantextokens scantokens scriptfont scriptscriptfont scriptscriptstyle \ -scriptspace scriptstyle scrollmode setbox setlanguage \ -sfcode shipout show showbox showboxbreadth \ -showboxdepth showgroups showifs showlists showthe \ -showtokens skewchar skip skipdef spacefactor \ -spaceskip span special splitbotmark splitbotmarks \ -splitdiscards splitfirstmark splitfirstmarks splitmaxdepth splittopskip \ -string suppressfontnotfounderror suppressifcsnameerror suppresslongerror suppressoutererror \ -synctex tabskip tagcode textdir textfont \ -textstyle the thickmuskip thinmuskip time \ -toks toksdef tolerance topmark topmarks \ -topskip tracingassigns tracingcommands tracinggroups tracingifs \ -tracinglostchars tracingmacros tracingnesting tracingonline tracingoutput \ -tracingpages tracingparagraphs tracingrestores tracingscantokens tracingstats \ -uccode uchyph underline unexpanded unhbox \ -unhcopy unkern unless unpenalty unskip \ -unvbox unvcopy uppercase vadjust valign \ -vbadness vbox vcenter vfil vfill \ -vfilneg vfuzz voffset vrule vsize \ -vskip vsplit vss vtop wd \ -widowpenalties widowpenalty write xdef xleaders \ -xspaceskip year +cleaders clearmarks closein closeout clubpenalties \ +clubpenalty copy count countdef cr \ +crampeddisplaystyle crampedscriptscriptstyle crampedscriptstyle crampedtextstyle crcr \ +csname currentgrouplevel currentgrouptype currentifbranch currentiflevel \ +currentiftype day deadcycles def defaulthyphenchar \ +defaultskewchar delcode delimiter delimiterfactor delimitershortfall \ +detokenize dimen dimendef dimexpr directlua \ +discretionary displayindent displaylimits displaystyle displaywidowpenalties \ +displaywidowpenalty displaywidth divide doublehyphendemerits dp \ +dump eTeXVersion eTeXminorversion 
eTeXrevision eTeXversion \ +edef efcode else emergencystretch end \ +endcsname endgroup endinput endlinechar eqno \ +errhelp errmessage errorcontextlines errorstopmode escapechar \ +everycr everydisplay everyeof everyhbox everyjob \ +everymath everypar everyvbox exhyphenchar exhyphenpenalty \ +expandafter expanded fam fi finalhyphendemerits \ +firstmark firstmarks floatingpenalty font fontchardp \ +fontcharht fontcharic fontcharwd fontdimen fontid \ +fontname formatname futurelet gdef gleaders \ +global globaldefs glueexpr glueshrink glueshrinkorder \ +gluestretch gluestretchorder gluetomu halign hangafter \ +hangindent hbadness hbox hfil hfill \ +hfilneg hfuzz hoffset holdinginserts hrule \ +hsize hskip hss ht hyphenation \ +hyphenchar hyphenpenalty if ifabsdim ifabsnum \ +ifcase ifcat ifcsname ifdefined ifdim \ +ifeof iffalse iffontchar ifhbox ifhmode \ +ifincsname ifinner ifmmode ifnum ifodd \ +ifpdfabsdim ifpdfabsnum ifpdfprimitive ifprimitive iftrue \ +ifvbox ifvmode ifvoid ifx ignorespaces \ +immediate indent initcatcodetable input inputlineno \ +insert insertpenalties interactionmode interlinepenalties interlinepenalty \ +jobname kern language lastbox lastkern \ +lastlinefit lastnodetype lastpenalty lastskip latelua \ +lccode leaders left leftghost lefthyphenmin \ +leftmarginkern leftskip leqno let letterspacefont \ +limits linepenalty lineskip lineskiplimit localbrokenpenalty \ +localinterlinepenalty localleftbox localrightbox long looseness \ +lower lowercase lpcode luaescapestring luastartup \ +luatexdatestamp luatexrevision luatexversion mag mark \ +marks mathaccent mathbin mathchar mathchardef \ +mathchoice mathclose mathcode mathdir mathinner \ +mathop mathopen mathord mathpunct mathrel \ +mathstyle mathsurround maxdeadcycles maxdepth meaning \ +medmuskip message middle mkern month \ +moveleft moveright mskip muexpr multiply \ +muskip muskipdef mutoglue newlinechar noalign \ +noboundary noexpand noindent nokerns noligs \ +nolimits nolocaldirs 
nolocalwhatsits nonscript nonstopmode \ +nulldelimiterspace nullfont number numexpr odelcode \ +odelimiter omathaccent omathchar omathchardef omathcode \ +omit openin openout or oradical \ +outer output outputbox outputpenalty over \ +overfullrule overline overwithdelims pagebottomoffset pagedepth \ +pagedir pagediscards pagefilllstretch pagefillstretch pagefilstretch \ +pagegoal pageheight pageleftoffset pagerightoffset pageshrink \ +pagestretch pagetopoffset pagetotal pagewidth par \ +pardir parfillskip parindent parshape parshapedimen \ +parshapeindent parshapelength parskip patterns pausing \ +pdfadjustspacing pdfannot pdfcatalog pdfcolorstack pdfcolorstackinit \ +pdfcompresslevel pdfcopyfont pdfcreationdate pdfdecimaldigits pdfdest \ +pdfdestmargin pdfdraftmode pdfeachlinedepth pdfeachlineheight pdfendlink \ +pdfendthread pdffirstlineheight pdffontattr pdffontexpand pdffontname \ +pdffontobjnum pdffontsize pdfgamma pdfgentounicode pdfglyphtounicode \ +pdfhorigin pdfignoreddimen pdfimageapplygamma pdfimagegamma pdfimagehicolor \ +pdfimageresolution pdfincludechars pdfinclusioncopyfonts pdfinclusionerrorlevel pdfinfo \ +pdfinsertht pdflastannot pdflastlinedepth pdflastlink pdflastobj \ +pdflastxform pdflastximage pdflastximagecolordepth pdflastximagepages pdflastxpos \ +pdflastypos pdflinkmargin pdfliteral pdfmapfile pdfmapline \ +pdfminorversion pdfnames pdfnoligatures pdfnormaldeviate pdfobj \ +pdfobjcompresslevel pdfoptionpdfminorversion pdfoutline pdfoutput pdfpageattr \ +pdfpagebox pdfpageheight pdfpageref pdfpageresources pdfpagesattr \ +pdfpagewidth pdfpkmode pdfpkresolution pdfprimitive pdfprotrudechars \ +pdfpxdimen pdfrandomseed pdfrefobj pdfrefxform pdfrefximage \ +pdfreplacefont pdfrestore pdfretval pdfsave pdfsavepos \ +pdfsetmatrix pdfsetrandomseed pdfstartlink pdfstartthread pdftexbanner \ +pdftexrevision pdftexversion pdfthread pdfthreadmargin pdftracingfonts \ +pdftrailer pdfuniformdeviate pdfuniqueresname pdfvorigin pdfxform \ +pdfxformattr 
pdfxformname pdfxformresources pdfximage pdfximagebbox \ +penalty postdisplaypenalty postexhyphenchar posthyphenchar predisplaydirection \ +predisplaypenalty predisplaysize preexhyphenchar prehyphenchar pretolerance \ +prevdepth prevgraf primitive protected quitvmode \ +radical raise read readline relax \ +relpenalty right rightghost righthyphenmin rightmarginkern \ +rightskip romannumeral rpcode savecatcodetable savinghyphcodes \ +savingvdiscards scantextokens scantokens scriptfont scriptscriptfont \ +scriptscriptstyle scriptspace scriptstyle scrollmode setbox \ +setlanguage sfcode shipout show showbox \ +showboxbreadth showboxdepth showgroups showifs showlists \ +showthe showtokens skewchar skip skipdef \ +spacefactor spaceskip span special splitbotmark \ +splitbotmarks splitdiscards splitfirstmark splitfirstmarks splitmaxdepth \ +splittopskip string suppressfontnotfounderror suppressifcsnameerror suppresslongerror \ +suppressoutererror synctex tabskip tagcode textdir \ +textfont textstyle the thickmuskip thinmuskip \ +time toks toksdef tolerance topmark \ +topmarks topskip tracingassigns tracingcommands tracinggroups \ +tracingifs tracinglostchars tracingmacros tracingnesting tracingonline \ +tracingoutput tracingpages tracingparagraphs tracingrestores tracingscantokens \ +tracingstats uccode uchyph underline unexpanded \ +unhbox unhcopy unkern unless unpenalty \ +unskip unvbox unvcopy uppercase vadjust \ +valign vbadness vbox vcenter vfil \ +vfill vfilneg vfuzz voffset vrule \ +vsize vskip vsplit vss vtop \ +wd widowpenalties widowpenalty write xdef \ +xleaders xspaceskip year keywordclass.tex.xetex=\ XeTeXversion diff --git a/context/data/scite/scite-context-external.properties b/context/data/scite/context/scite-context-external.properties index 5c7149341..c7d0c4a17 100644 --- a/context/data/scite/scite-context-external.properties +++ b/context/data/scite/context/scite-context-external.properties @@ -1,36 +1,46 @@ # external lpeg lexers -import 
$(SciteDefaultHome)/lexers/lpeg
+lexer.lpeg.home=$(SciteDefaultHome)/context/lexers

-lexer.lpeg.home=$(SciteDefaultHome)/lexers
+lexer.lpeg.color.theme=scite-context-theme
+# lexer.lpeg.color.theme=$(SciteDefaultHome)/context/lexers/themes/scite-context-theme.lua

-# # pre 3.03:
-#
-#~ lexer.lpeg.script=$(lexer.lpeg.home)/scite-context-lexer.lua
-#
-# # post 3.03:
-#
-lexer.lpeg.script=$(lexer.lpeg.home)/lexer.lua
-#
-# where we load the extensions in the lexers themselves.
-
-lexer.lpeg.color.theme=$(lexer.lpeg.home)/themes/scite-context-theme.lua
-
-# alas, only a few properties are passed (only indentation)
+# The lexer dll no longer interfaces to the following properties. It never had a full
+# interface, so maybe I'll make my own.

fold.by.parsing=1
fold.by.indentation=0
fold.by.line=0
+fold.line.comments=0
+
+# you can put the dll/so file in the <scitehome>/context/lexers path or keep it in
+# <scitehome>/lexers

if PLAT_WIN
- lexerpath.*.lpeg=$(lexer.lpeg.home)/LexLPeg.dll
+ lexerpath.*.lpeg=$(lexer.lpeg.home)/../../lexers/lexlpeg.dll
+# lexerpath.*.lpeg=$(lexer.lpeg.home)/lexers/lexlpeg.dll

if PLAT_GTK
- lexerpath.*.lpeg=$(lexer.lpeg.home)/liblexlpeg.so
+ lexerpath.*.lpeg=$(lexer.lpeg.home)/../../lexers/liblexlpeg.so
+# lexerpath.*.lpeg=$(lexer.lpeg.home)/lexers/liblexlpeg.so
+
+# the variable lexer.name is automatically set but I'm not sure what the following
+# one is supposed to do so we keep it around (same as in lpeg.properties, which we
+# don't load)

lexer.*.lpeg=lpeg

-file.patterns.cweb=*.h;*.c;*.w;*.hh;*.cc;*.ww;*.hpp;*.cpp;*.hxx;*.cxx;
+# in principle you can do the following, as we're mostly compatible with the
+# default lexers, but for a regular context setup the lexers built into scite are
+# just fine, so in principle we only need the dll/so
+#
+# import lexers/lpeg
+
+# patterns should be original (not clash with the built-in ones)
+
+file.patterns.cweb=*.w;*.ww;
+file.patterns.cpp=*.h;*.c;*.hh;*.cc;*.hpp;*.cpp;*.hxx;*.cxx;
+file.patterns.bib=*.bib
lexer.$(file.patterns.metapost)=lpeg_scite-context-lexer-mps lexer.$(file.patterns.metafun)=lpeg_scite-context-lexer-mps @@ -40,18 +50,19 @@ lexer.$(file.patterns.example)=lpeg_scite-context-lexer-xml lexer.$(file.patterns.text)=lpeg_scite-context-lexer-txt lexer.$(file.patterns.pdf)=lpeg_scite-context-lexer-pdf lexer.$(file.patterns.cweb)=lpeg_scite-context-lexer-web +lexer.$(file.patterns.cpp)=lpeg_scite-context-lexer-cpp +lexer.$(file.patterns.bib)=lpeg_scite-context-lexer-bibtex lexer.$(file.patterns.tex)=lpeg_scite-context-lexer-tex lexer.$(file.patterns.xml)=lpeg_scite-context-lexer-xml lexer.$(file.patterns.html)=lpeg_scite-context-lexer-xml -lexer.$(file.patterns.cpp)=lpeg_scite-context-lexer-web # It's a real pitty that we cannot overload the errorlist lexer. That would # make scite even more interesting. Add to that including lpeg and the lpeg # lexer and thereby providing an interface to properties. -# lexer.errorlist=lpeg_scite-context-lexer-txt -# lexer.output=lpeg_scite-context-lexer-txt +#~ lexer.errorlist=lpeg_scite-context-lexer-txt +#~ lexer.output=lpeg_scite-context-lexer-txt comment.block.lpeg_scite-context-lexer-tex=% comment.block.at.line.start.lpeg_scite-context-lexer-tex=1 diff --git a/context/data/scite/scite-context-internal.properties b/context/data/scite/context/scite-context-internal.properties index 130e64f1e..038381dc7 100644 --- a/context/data/scite/scite-context-internal.properties +++ b/context/data/scite/context/scite-context-internal.properties @@ -8,8 +8,8 @@ # # % interface=none|metapost|mp|metafun -import scite-context-data-metapost -import scite-context-data-metafun +import context/scite-context-data-metapost +import context/scite-context-data-metafun keywordclass.metapost.all=$(keywordclass.metapost.tex) $(keywordclass.metapost.plain) $(keywordclass.metapost.primitives) keywordclass.metafun.all=$(keywordclass.metafun.constants) $(keywordclass.metafun.helpers) @@ -44,9 +44,9 @@ comment.block.at.line.start.metapost=1 # # % 
interface=all|nl|en|de|cz|it|ro|latex

-import scite-context-data-tex
-import scite-context-data-context
-import scite-context-data-interfaces
+import context/scite-context-data-tex
+import context/scite-context-data-context
+import context/scite-context-data-interfaces

word.characters.$(file.patterns.context)=abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ@!?_\\
diff --git a/context/data/scite/context/scite-context-user.properties b/context/data/scite/context/scite-context-user.properties
new file mode 100644
index 000000000..b6fc34282
--- /dev/null
+++ b/context/data/scite/context/scite-context-user.properties
@@ -0,0 +1,15 @@
+# this loads the basics
+
+import context/scite-context
+
+# internal lexing
+
+import context/scite-context-internal
+
+# external lexing (tex, mps, cld/lua, xml)
+
+import context/scite-context-external
+
+# this does some tuning
+
+import context/scite-pragma
diff --git a/context/data/scite/scite-context.properties b/context/data/scite/context/scite-context.properties
index bc1af717c..78850ef0d 100644
--- a/context/data/scite/scite-context.properties
+++ b/context/data/scite/context/scite-context.properties
@@ -66,7 +66,7 @@ open.suffix.$(file.patterns.context)=.tex

# Example : patterns

file.patterns.xml=
-file.patterns.example=*.xml;*.xsl;*.xsd;*.fo;*.exa;*.rlb;*.rlg;*.rlv;*.rng;*.xfdf;*.xslt;*.dtd;*.lmx;*.htm;*.html;*.xhtml*.ctx;*.export;
+file.patterns.example=*.xml;*.xsl;*.xsd;*.fo;*.exa;*.rlb;*.rlg;*.rlv;*.rng;*.xfdf;*.xslt;*.dtd;*.lmx;*.htm;*.html;*.xhtml;*.ctx;*.export;*.svg;*.xul
open.suffix.$(file.patterns.example)=.xml
filter.example=eXaMpLe|$(file.patterns.example)|
#~ lexer.$(file.patterns.example)=xml
@@ -160,7 +160,7 @@ xml.auto.close.tags=1

# extensions

-import scite-ctx
+import context/scite-ctx

# hard coded compile / build / go

@@ -229,14 +229,14 @@ command.groupundo.29.*=yes
command.save.before.29.*=2
command.shortcut.29.*=Alt+F12

-command.name.30.*=Run with jit
-command.subsystem.30.*=1
-command.30.$(file.patterns.context)=$(name.context.runjit) $(FileNameExt) -command.30.$(file.patterns.metafun)=$(name.context.runjit) $(FileNameExt) --metapost -command.30.$(file.patterns.exmaple)=$(name.context.runjit) $(FileNameExt) --xml -command.groupundo.30.*=yes -command.save.before.30.*=2 -command.shortcut.30.*=Alt+F7 +#~ command.name.30.*=Run with jit +#~ command.subsystem.30.*=1 +#~ command.30.$(file.patterns.context)=$(name.context.runjit) $(FileNameExt) +#~ command.30.$(file.patterns.metafun)=$(name.context.runjit) $(FileNameExt) --metapost +#~ command.30.$(file.patterns.exmaple)=$(name.context.runjit) $(FileNameExt) --xml +#~ command.groupundo.30.*=yes +#~ command.save.before.30.*=2 +#~ command.shortcut.30.*=Alt+F7 # 2 : pdf viewing diff --git a/context/data/scite/scite-ctx-context.properties b/context/data/scite/context/scite-ctx-context.properties index a1d5800e6..a1d5800e6 100644 --- a/context/data/scite/scite-ctx-context.properties +++ b/context/data/scite/context/scite-ctx-context.properties diff --git a/context/data/scite/scite-ctx-example.properties b/context/data/scite/context/scite-ctx-example.properties index 78b2f2859..78b2f2859 100644 --- a/context/data/scite/scite-ctx-example.properties +++ b/context/data/scite/context/scite-ctx-example.properties diff --git a/context/data/scite/scite-ctx.lua b/context/data/scite/context/scite-ctx.lua index 421e9cd89..24f5b34b8 100644 --- a/context/data/scite/scite-ctx.lua +++ b/context/data/scite/context/scite-ctx.lua @@ -1383,3 +1383,13 @@ function toggle_strip(name) OnStrip = ignore_strip end end + +-- this way we get proper lexing for lexers that do more extensive +-- parsing + +function OnOpen(filename) + -- print("opening: " .. filename .. " (size: " .. editor.TextLength .. 
")") + editor:Colourise(1,editor.TextLength) +end + +-- output.LexerLanguage = "" diff --git a/context/data/scite/scite-ctx.properties b/context/data/scite/context/scite-ctx.properties index acbb33c0b..874a381e3 100644 --- a/context/data/scite/scite-ctx.properties +++ b/context/data/scite/context/scite-ctx.properties @@ -12,7 +12,7 @@ # <?xml version='1.0' language='uk' ?> ext.lua.auto.reload=1 -ext.lua.startup.script=$(SciteDefaultHome)/scite-ctx.lua +ext.lua.startup.script=$(SciteDefaultHome)/context/scite-ctx.lua #~ extension.$(file.patterns.context)=scite-ctx.lua #~ extension.$(file.patterns.example)=scite-ctx.lua @@ -150,8 +150,8 @@ command.save.before.26.*=2 command.groupundo.26.*=yes command.shortcut.26.*=Ctrl+E -import scite-ctx-context -import scite-ctx-example +import context/scite-ctx-context +import context/scite-ctx-example ctx.template.scan=yes ctx.template.rescan=no diff --git a/context/data/scite/scite-metapost.properties b/context/data/scite/context/scite-metapost.properties index e3ac25244..fc06dcaa2 100644 --- a/context/data/scite/scite-metapost.properties +++ b/context/data/scite/context/scite-metapost.properties @@ -69,7 +69,7 @@ lexer.metapost.comment.process=0 # Metapost: keywords -import scite-context-data-metapost.properties +import context/scite-context-data-metapost.properties keywords.$(file.patterns.metapost)=$(keywordclass.metapost.all) diff --git a/context/data/scite/scite-pragma.properties b/context/data/scite/context/scite-pragma.properties index 7308f1fb6..2dea18bad 100644 --- a/context/data/scite/scite-pragma.properties +++ b/context/data/scite/context/scite-pragma.properties @@ -25,7 +25,9 @@ $(filter.metafun)\ $(filter.example)\ $(filter.lua)\ $(filter.text)\ -$(filter.pdf) +$(filter.pdf)\ +$(filter.cweb)\ +$(filter.txt) # Editor: menus @@ -36,5 +38,4 @@ XML|xml||\ Lua|lua||\ Text|txt||\ PDF|pdf||\ -CWeb|web||\ -Text|txt|| +CWeb|cweb|| diff --git a/context/data/scite/scite-tex.properties 
b/context/data/scite/context/scite-tex.properties index 6933971e2..7d271eaf1 100644 --- a/context/data/scite/scite-tex.properties +++ b/context/data/scite/context/scite-tex.properties @@ -89,7 +89,7 @@ lexer.tex.auto.if=1 # only the macros that make sense: -import scite-context-data-tex.properties +import context/scite-context-data-tex.properties # collections diff --git a/context/data/scite/lexers/archive/scite-context-lexer-pre-3-3-1.lua b/context/data/scite/lexers/archive/scite-context-lexer-pre-3-3-1.lua deleted file mode 100644 index 7883177b4..000000000 --- a/context/data/scite/lexers/archive/scite-context-lexer-pre-3-3-1.lua +++ /dev/null @@ -1,1100 +0,0 @@ -local info = { - version = 1.324, - comment = "basics for scintilla lpeg lexer for context/metafun", - author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", - copyright = "PRAGMA ADE / ConTeXt Development Team", - license = "see context related readme files", - comment = "contains copyrighted code from mitchell.att.foicica.com", - -} - --- todo: move all code here --- todo: explore adapted dll ... properties + init - --- The fold and lex functions are copied and patched from original code by Mitchell (see --- lexer.lua). All errors are mine. --- --- Starting with SciTE version 3.20 there is an issue with coloring. As we still lack --- a connection with scite itself (properties as well as printing to the log pane) we --- cannot trace this (on windows). As far as I can see, there are no fundamental --- changes in lexer.lua or LexLPeg.cxx so it must be in scintilla itself. So for the --- moment I stick to 3.10. Indicators are: no lexing of 'next' and 'goto <label>' in the --- Lua lexer and no brace highlighting either. Interesting is that it does work ok in --- the cld lexer (so the Lua code is okay). Also the fact that char-def.lua lexes fast --- is a signal that the lexer quits somewhere halfway. --- --- After checking 3.24 and adapting to the new lexer tables things are okay again. 
So, --- this version assumes 3.24 or higher. In 3.24 we have a different token result, i.e. no --- longer a { tag, pattern } but just two return values. I didn't check other changes but --- will do that when I run into issues. --- --- I've considered making a whole copy and patch the other functions too as we need --- an extra nesting model. However, I don't want to maintain too much. An unfortunate --- change in 3.03 is that no longer a script can be specified. This means that instead --- of loading the extensions via the properties file, we now need to load them in our --- own lexers, unless of course we replace lexer.lua completely (which adds another --- installation issue). --- --- Another change has been that _LEXERHOME is no longer available. It looks like more and --- more functionality gets dropped so maybe at some point we need to ship our own dll/so --- files. For instance, I'd like to have access to the current filename and other scite --- properties. For instance, we could cache some info with each file, if only we had --- knowledge of what file we're dealing with. --- --- For huge files folding can be pretty slow and I do have some large ones that I keep --- open all the time. Loading is normally no ussue, unless one has remembered the status --- and the cursor is at the last line of a 200K line file. Optimizing the fold function --- brought down loading of char-def.lua from 14 sec => 8 sec. Replacing the word_match --- function and optimizing the lex function gained another 2+ seconds. A 6 second load --- is quite ok for me. The changed lexer table structure (no subtables) brings loading --- down to a few seconds. --- --- When the lexer path is copied to the textadept lexer path, and the theme definition to --- theme path (as lexer.lua), the lexer works there as well. When I have time and motive --- I will make a proper setup file to tune the look and feel a bit and associate suffixes --- with the context lexer. 
The textadept editor has a nice style tracing option but lacks --- the tabs for selecting files that scite has. It also has no integrated run that pipes --- to the log pane (I wonder if it could borrow code from the console2 project). Interesting --- is that the jit version of textadept crashes on lexing large files (and does not feel --- faster either). --- --- Function load(lexer_name) starts with _M.WHITESPACE = lexer_name..'_whitespace' which --- means that we need to have it frozen at the moment we load another lexer. Because spacing --- is used to revert to a parent lexer we need to make sure that we load children as late --- as possible in order not to get the wrong whitespace trigger. This took me quite a while --- to figure out (not being that familiar with the internals). The lex and fold functions --- have been optimized. It is a pitty that there is no proper print available. Another thing --- needed is a default style in ourown theme style definition, as otherwise we get wrong --- nested lexers, especially if they are larger than a view. This is the hardest part of --- getting things right. --- --- Eventually it might be safer to copy the other methods from lexer.lua here as well so --- that we have no dependencies, apart from the c library (for which at some point the api --- will be stable I hope). --- --- It's a pitty that there is no scintillua library for the OSX version of scite. Even --- better would be to have the scintillua library as integral part of scite as that way I --- could use OSX alongside windows and linux (depending on needs). Also nice would be to --- have a proper interface to scite then because currently the lexer is rather isolated and the --- lua version does not provide all standard libraries. It would also be good to have lpeg --- support in the regular scite lua extension (currently you need to pick it up from someplace --- else). 
- -local lpeg = require 'lpeg' - -local R, P, S, C, V, Cp, Cs, Ct, Cmt, Cc, Cf, Cg, Carg = lpeg.R, lpeg.P, lpeg.S, lpeg.C, lpeg.V, lpeg.Cp, lpeg.Cs, lpeg.Ct, lpeg.Cmt, lpeg.Cc, lpeg.Cf, lpeg.Cg, lpeg.Carg -local lpegmatch = lpeg.match -local find, gmatch, match, lower, upper, gsub = string.find, string.gmatch, string.match, string.lower, string.upper, string.gsub -local concat = table.concat -local global = _G -local type, next, setmetatable, rawset = type, next, setmetatable, rawset - -if lexer then - -- in recent c++ code the lexername and loading is hard coded -elseif _LEXERHOME then - dofile(_LEXERHOME .. '/lexer.lua') -- pre 3.03 situation -else - dofile('lexer.lua') -- whatever -end - -lexer.context = lexer.context or { } -local context = lexer.context - -context.patterns = context.patterns or { } -local patterns = context.patterns - -lexer._CONTEXTEXTENSIONS = true - -local locations = { - -- lexer.context.path, - "data", -- optional data directory - "..", -- regular scite directory -} - -local function collect(name) --- local definitions = loadfile(name .. ".luc") or loadfile(name .. ".lua") - local okay, definitions = pcall(function () return require(name) end) - if okay then - if type(definitions) == "function" then - definitions = definitions() - end - if type(definitions) == "table" then - return definitions - end - end -end - -function context.loaddefinitions(name) - for i=1,#locations do - local data = collect(locations[i] .. "/" .. name) - if data then - return data - end - end -end - --- maybe more efficient: - -function context.word_match(words,word_chars,case_insensitive) - local chars = '%w_' -- maybe just "" when word_chars - if word_chars then - chars = '^([' .. chars .. gsub(word_chars,'([%^%]%-])', '%%%1') ..']+)' - else - chars = '^([' .. 
chars ..']+)' - end - if case_insensitive then - local word_list = { } - for i=1,#words do - word_list[lower(words[i])] = true - end - return P(function(input, index) - local s, e, word = find(input,chars,index) - return word and word_list[lower(word)] and e + 1 or nil - end) - else - local word_list = { } - for i=1,#words do - word_list[words[i]] = true - end - return P(function(input, index) - local s, e, word = find(input,chars,index) - return word and word_list[word] and e + 1 or nil - end) - end -end - -local idtoken = R("az","AZ","\127\255","__") -local digit = R("09") -local sign = S("+-") -local period = P(".") -local space = S(" \n\r\t\f\v") - -patterns.idtoken = idtoken - -patterns.digit = digit -patterns.sign = sign -patterns.period = period - -patterns.cardinal = digit^1 -patterns.integer = sign^-1 * digit^1 - -patterns.real = - sign^-1 * ( -- at most one - digit^1 * period * digit^0 -- 10.0 10. - + digit^0 * period * digit^1 -- 0.10 .10 - + digit^1 -- 10 - ) - -patterns.restofline = (1-S("\n\r"))^1 -patterns.space = space -patterns.spacing = space^1 -patterns.nospacing = (1-space)^1 -patterns.anything = P(1) - -local endof = S("\n\r\f") - -patterns.startofline = P(function(input,index) - return (index == 1 or lpegmatch(endof,input,index-1)) and index -end) - -function context.exact_match(words,word_chars,case_insensitive) - local characters = concat(words) - local pattern -- the concat catches _ etc - if word_chars == true or word_chars == false or word_chars == nil then - word_chars = "" - end - if type(word_chars) == "string" then - pattern = S(characters) + idtoken - if case_insensitive then - pattern = pattern + S(upper(characters)) + S(lower(characters)) - end - if word_chars ~= "" then - pattern = pattern + S(word_chars) - end - elseif word_chars then - pattern = word_chars - end - if case_insensitive then - local list = { } - for i=1,#words do - list[lower(words[i])] = true - end - return Cmt(pattern^1, function(_,i,s) - return list[lower(s)] -- 
and i or nil - end) - else - local list = { } - for i=1,#words do - list[words[i]] = true - end - return Cmt(pattern^1, function(_,i,s) - return list[s] -- and i or nil - end) - end -end - --- spell checking (we can only load lua files) --- --- return { --- min = 3, --- max = 40, --- n = 12345, --- words = { --- ["someword"] = "someword", --- ["anotherword"] = "Anotherword", --- }, --- } - -local lists = { } - -function context.setwordlist(tag,limit) -- returns hash (lowercase keys and original values) - if not tag or tag == "" then - return false, 3 - end - local list = lists[tag] - if not list then - list = context.loaddefinitions("spell-" .. tag) - if not list or type(list) ~= "table" then - list = { words = false, min = 3 } - else - list.words = list.words or false - list.min = list.min or 3 - end - lists[tag] = list - end - return list.words, list.min -end - -patterns.wordtoken = R("az","AZ","\127\255") -patterns.wordpattern = patterns.wordtoken^3 -- todo: if limit and #s < limit then - --- -- pre 3.24: --- --- function context.checkedword(validwords,validminimum,s,i) -- ,limit --- if not validwords then -- or #s < validminimum then --- return true, { "text", i } -- { "default", i } --- else --- -- keys are lower --- local word = validwords[s] --- if word == s then --- return true, { "okay", i } -- exact match --- elseif word then --- return true, { "warning", i } -- case issue --- else --- local word = validwords[lower(s)] --- if word == s then --- return true, { "okay", i } -- exact match --- elseif word then --- return true, { "warning", i } -- case issue --- elseif upper(s) == s then --- return true, { "warning", i } -- probably a logo or acronym --- else --- return true, { "error", i } --- end --- end --- end --- end - -function context.checkedword(validwords,validminimum,s,i) -- ,limit - if not validwords then -- or #s < validminimum then - return true, "text", i -- { "default", i } - else - -- keys are lower - local word = validwords[s] - if word == s 
then - return true, "okay", i -- exact match - elseif word then - return true, "warning", i -- case issue - else - local word = validwords[lower(s)] - if word == s then - return true, "okay", i -- exact match - elseif word then - return true, "warning", i -- case issue - elseif upper(s) == s then - return true, "warning", i -- probably a logo or acronym - else - return true, "error", i - end - end - end -end - -function context.styleofword(validwords,validminimum,s) -- ,limit - if not validwords or #s < validminimum then - return "text" - else - -- keys are lower - local word = validwords[s] - if word == s then - return "okay" -- exact match - elseif word then - return "warning" -- case issue - else - local word = validwords[lower(s)] - if word == s then - return "okay" -- exact match - elseif word then - return "warning" -- case issue - elseif upper(s) == s then - return "warning" -- probably a logo or acronym - else - return "error" - end - end - end -end - --- overloaded functions - -local FOLD_BASE = SC_FOLDLEVELBASE -local FOLD_HEADER = SC_FOLDLEVELHEADERFLAG -local FOLD_BLANK = SC_FOLDLEVELWHITEFLAG - -local get_style_at = GetStyleAt -local get_property = GetProperty -local get_indent_amount = GetIndentAmount - -local h_table, b_table, n_table = { }, { }, { } - -setmetatable(h_table, { __index = function(t,level) local v = { level, FOLD_HEADER } t[level] = v return v end }) -setmetatable(b_table, { __index = function(t,level) local v = { level, FOLD_BLANK } t[level] = v return v end }) -setmetatable(n_table, { __index = function(t,level) local v = { level } t[level] = v return v end }) - --- -- todo: move the local functions outside (see below) .. 
old variant < 3.24 --- --- local newline = P("\r\n") + S("\r\n") --- local p_yes = Cp() * Cs((1-newline)^1) * newline^-1 --- local p_nop = newline --- --- local function fold_by_parsing(text,start_pos,start_line,start_level,lexer) --- local foldsymbols = lexer._foldsymbols --- if not foldsymbols then --- return { } --- end --- local patterns = foldsymbols._patterns --- if not patterns then --- return { } --- end --- local nofpatterns = #patterns --- if nofpatterns == 0 then --- return { } --- end --- local folds = { } --- local line_num = start_line --- local prev_level = start_level --- local current_level = prev_level --- local validmatches = foldsymbols._validmatches --- if not validmatches then --- validmatches = { } --- for symbol, matches in next, foldsymbols do -- whatever = { start = 1, stop = -1 } --- if not find(symbol,"^_") then -- brrr --- for s, _ in next, matches do --- validmatches[s] = true --- end --- end --- end --- foldsymbols._validmatches = validmatches --- end --- -- of course we could instead build a nice lpeg checker .. 
something for --- -- a rainy day with a stack of new cd's at hand --- local function action_y(pos,line) --- for i=1,nofpatterns do --- for s, m in gmatch(line,patterns[i]) do --- if validmatches[m] then --- local symbols = foldsymbols[get_style_at(start_pos + pos + s - 1)] --- if symbols then --- local action = symbols[m] --- if action then --- if type(action) == 'number' then -- we could store this in validmatches if there was only one symbol category --- current_level = current_level + action --- else --- current_level = current_level + action(text,pos,line,s,m) --- end --- if current_level < FOLD_BASE then --- current_level = FOLD_BASE --- end --- end --- end --- end --- end --- end --- if current_level > prev_level then --- folds[line_num] = h_table[prev_level] -- { prev_level, FOLD_HEADER } --- else --- folds[line_num] = n_table[prev_level] -- { prev_level } --- end --- prev_level = current_level --- line_num = line_num + 1 --- end --- local function action_n() --- folds[line_num] = b_table[prev_level] -- { prev_level, FOLD_BLANK } --- line_num = line_num + 1 --- end --- if lexer._reset_parser then --- lexer._reset_parser() --- end --- local lpegpattern = (p_yes/action_y + p_nop/action_n)^0 -- not too efficient but indirect function calls are neither but --- lpegmatch(lpegpattern,text) -- keys are not pressed that fast ... 
large files are slow anyway --- return folds --- end - --- The 3.24 variant; no longer subtable optimization is needed: - -local newline = P("\r\n") + S("\r\n") -local p_yes = Cp() * Cs((1-newline)^1) * newline^-1 -local p_nop = newline - -local folders = { } - -local function fold_by_parsing(text,start_pos,start_line,start_level,lexer) - local folder = folders[lexer] - if not folder then - -- - local pattern, folds, text, start_pos, line_num, prev_level, current_level - -- - local fold_symbols = lexer._foldsymbols - local fold_pattern = lexer._foldpattern -- use lpeg instead (context extension) - -- - if fold_pattern then - -- if no functions are found then we could have a faster one - - -- fold_pattern = Cp() * C(fold_pattern) * Carg(1) / function(s,match,pos) - -- local symbols = fold_symbols[get_style_at(start_pos + pos + s - 1)] - -- local l = symbols and symbols[match] - -- if l then - -- local t = type(l) - -- if t == 'number' then - -- current_level = current_level + l - -- elseif t == 'function' then - -- current_level = current_level + l(text, pos, line, s, match) - -- end - -- end - -- end - -- fold_pattern = (fold_pattern + P(1))^0 - -- local action_y = function(pos,line) - -- lpegmatch(fold_pattern,line,1,pos) - -- folds[line_num] = prev_level - -- if current_level > prev_level then - -- folds[line_num] = prev_level + FOLD_HEADER - -- end - -- if current_level < FOLD_BASE then - -- current_level = FOLD_BASE - -- end - -- prev_level = current_level - -- line_num = line_num + 1 - -- end - -- local action_n = function() - -- folds[line_num] = prev_level + FOLD_BLANK - -- line_num = line_num + 1 - -- end - -- pattern = (p_yes/action_y + p_nop/action_n)^0 - - fold_pattern = Cp() * C(fold_pattern) / function(s,match) - local symbols = fold_symbols[get_style_at(start_pos + s)] - if symbols then - local l = symbols[match] - if l then - current_level = current_level + l - end - end - end - local action_y = function() - folds[line_num] = prev_level - if 
current_level > prev_level then - folds[line_num] = prev_level + FOLD_HEADER - end - if current_level < FOLD_BASE then - current_level = FOLD_BASE - end - prev_level = current_level - line_num = line_num + 1 - end - local action_n = function() - folds[line_num] = prev_level + FOLD_BLANK - line_num = line_num + 1 - end - pattern = ((fold_pattern + (1-newline))^1 * newline / action_y + newline/action_n)^0 - - else - -- the traditional one but a bit optimized - local fold_symbols_patterns = fold_symbols._patterns - local action_y = function(pos,line) - for j = 1, #fold_symbols_patterns do - for s, match in gmatch(line,fold_symbols_patterns[j]) do -- '()('..patterns[i]..')' - local symbols = fold_symbols[get_style_at(start_pos + pos + s - 1)] - local l = symbols and symbols[match] - local t = type(l) - if t == 'number' then - current_level = current_level + l - elseif t == 'function' then - current_level = current_level + l(text, pos, line, s, match) - end - end - end - folds[line_num] = prev_level - if current_level > prev_level then - folds[line_num] = prev_level + FOLD_HEADER - end - if current_level < FOLD_BASE then - current_level = FOLD_BASE - end - prev_level = current_level - line_num = line_num + 1 - end - local action_n = function() - folds[line_num] = prev_level + FOLD_BLANK - line_num = line_num + 1 - end - pattern = (p_yes/action_y + p_nop/action_n)^0 - end - -- - local reset_parser = lexer._reset_parser - -- - folder = function(_text_,_start_pos_,_start_line_,_start_level_) - if reset_parser then - reset_parser() - end - folds = { } - text = _text_ - start_pos = _start_pos_ - line_num = _start_line_ - prev_level = _start_level_ - current_level = prev_level - lpegmatch(pattern,text) --- return folds -local t = folds -folds = nil -return t -- so folds can be collected - end - folders[lexer] = folder - end - return folder(text,start_pos,start_line,start_level,lexer) -end - --- local function fold_by_indentation(text,start_pos,start_line,start_level) --- 
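The fold-by-parsing code above maintains a running nesting level: each fold symbol found on a line adds its delta to the level, every line records the level it starts at, a line whose level increases is flagged as a fold header, and blank lines carry the white flag. A simplified Python sketch of that bookkeeping, using the standard Scintilla flag values; the per-style symbol lookup (`get_style_at`) and the lpeg matching are omitted, and the function name is mine:

```python
FOLD_BASE = 0x400      # SC_FOLDLEVELBASE
FOLD_HEADER = 0x2000   # SC_FOLDLEVELHEADERFLAG
FOLD_BLANK = 0x1000    # SC_FOLDLEVELWHITEFLAG

def fold_by_symbols(lines, symbols, start_line=0, start_level=FOLD_BASE):
    """Per-line fold levels: each symbol occurrence adds its delta;
    a line whose level increases becomes a fold header; blank lines
    are flagged as white space."""
    folds = {}
    prev_level = current_level = start_level
    line_num = start_line
    for line in lines:
        if line.strip():
            for token, delta in symbols.items():
                current_level += line.count(token) * delta
            current_level = max(current_level, FOLD_BASE)  # clamp at base
            folds[line_num] = prev_level
            if current_level > prev_level:
                folds[line_num] |= FOLD_HEADER
            prev_level = current_level
        else:
            folds[line_num] = prev_level | FOLD_BLANK
        line_num += 1
    return folds
```

Caching one compiled folder closure per lexer, as the new `folders[lexer]` table does above, avoids rebuilding the pattern and rebinding the actions on every fold request.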
local folds = { } --- local current_line = start_line --- local prev_level = start_level --- for line in gmatch(text,'[\t ]*(.-)\r?\n') do --- if line ~= "" then --- local current_level = FOLD_BASE + get_indent_amount(current_line) --- if current_level > prev_level then -- next level --- local i = current_line - 1 --- while true do --- local f = folds[i] --- if f and f[2] == FOLD_BLANK then --- i = i - 1 --- else --- break --- end --- end --- local f = folds[i] --- if f then --- f[2] = FOLD_HEADER --- end -- low indent --- folds[current_line] = n_table[current_level] -- { current_level } -- high indent --- elseif current_level < prev_level then -- prev level --- local f = folds[current_line - 1] --- if f then --- f[1] = prev_level -- high indent --- end --- folds[current_line] = n_table[current_level] -- { current_level } -- low indent --- else -- same level --- folds[current_line] = n_table[prev_level] -- { prev_level } --- end --- prev_level = current_level --- else --- folds[current_line] = b_table[prev_level] -- { prev_level, FOLD_BLANK } --- end --- current_line = current_line + 1 --- end --- return folds --- end - --- local function fold_by_indentation(text,start_pos,start_line,start_level) --- local folds = { } --- local current_line = start_line --- local prev_level = start_level --- for line in gmatch(text,'[\t ]*(.-)\r?\n') do --- if line ~= '' then --- local current_level = FOLD_BASE + get_indent_amount(current_line) --- if current_level > prev_level then -- next level --- local i = current_line - 1 --- local f --- while true do --- f = folds[i] --- if not f then --- break --- elseif f[2] == FOLD_BLANK then --- i = i - 1 --- else --- f[2] = FOLD_HEADER -- low indent --- break --- end --- end --- folds[current_line] = { current_level } -- high indent --- elseif current_level < prev_level then -- prev level --- local f = folds[current_line - 1] --- if f then --- f[1] = prev_level -- high indent --- end --- folds[current_line] = { current_level } -- low 
indent --- else -- same level --- folds[current_line] = { prev_level } --- end --- prev_level = current_level --- else --- folds[current_line] = { prev_level, FOLD_BLANK } --- end --- current_line = current_line + 1 --- end --- for line, level in next, folds do --- folds[line] = level[1] + (level[2] or 0) --- end --- return folds --- end - -local folds, current_line, prev_level - -local function action_y() - local current_level = FOLD_BASE + get_indent_amount(current_line) - if current_level > prev_level then -- next level - local i = current_line - 1 - local f - while true do - f = folds[i] - if not f then - break - elseif f[2] == FOLD_BLANK then - i = i - 1 - else - f[2] = FOLD_HEADER -- low indent - break - end - end - folds[current_line] = { current_level } -- high indent - elseif current_level < prev_level then -- prev level - local f = folds[current_line - 1] - if f then - f[1] = prev_level -- high indent - end - folds[current_line] = { current_level } -- low indent - else -- same level - folds[current_line] = { prev_level } - end - prev_level = current_level - current_line = current_line + 1 -end - -local function action_n() - folds[current_line] = { prev_level, FOLD_BLANK } - current_line = current_line + 1 -end - -local pattern = ( S("\t ")^0 * ( (1-S("\n\r"))^1 / action_y + P(true) / action_n) * newline )^0 - -local function fold_by_indentation(text,start_pos,start_line,start_level) - -- initialize - folds = { } - current_line = start_line - prev_level = start_level - -- define - -- -- not here .. 
pattern binds and local functions are not frozen - -- analyze - lpegmatch(pattern,text) - -- flatten - for line, level in next, folds do - folds[line] = level[1] + (level[2] or 0) - end - -- done --- return folds -local t = folds -folds = nil -return t -- so folds can be collected -end - -local function fold_by_line(text,start_pos,start_line,start_level) - local folds = { } - -- can also be lpeg'd - for _ in gmatch(text,".-\r?\n") do - folds[start_line] = n_table[start_level] -- { start_level } - start_line = start_line + 1 - end - return folds -end - -local threshold_by_lexer = 512 * 1024 -- we don't know the filesize yet -local threshold_by_parsing = 512 * 1024 -- we don't know the filesize yet -local threshold_by_indentation = 512 * 1024 -- we don't know the filesize yet -local threshold_by_line = 512 * 1024 -- we don't know the filesize yet - -function context.fold(text,start_pos,start_line,start_level) -- hm, we had size thresholds .. where did they go - if text == '' then - return { } - end - local lexer = global._LEXER - local fold_by_lexer = lexer._fold - local fold_by_symbols = lexer._foldsymbols - local filesize = 0 -- we don't know that - if fold_by_lexer then - if filesize <= threshold_by_lexer then - return fold_by_lexer(text,start_pos,start_line,start_level,lexer) - end - elseif fold_by_symbols then -- and get_property('fold.by.parsing',1) > 0 then - if filesize <= threshold_by_parsing then - return fold_by_parsing(text,start_pos,start_line,start_level,lexer) - end - elseif get_property('fold.by.indentation',1) > 0 then - if filesize <= threshold_by_indentation then - return fold_by_indentation(text,start_pos,start_line,start_level,lexer) - end - elseif get_property('fold.by.line',1) > 0 then - if filesize <= threshold_by_line then - return fold_by_line(text,start_pos,start_line,start_level,lexer) - end - end - return { } -end - --- The following code is mostly unchanged: - -local function add_rule(lexer, id, rule) - if not lexer._RULES then - 
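The indentation folder above derives each line's level from its indent, walks back over blank lines to flag the previous non-blank line as the header when the level rises, and finally flattens `{level, flag}` pairs into single integers. A Python sketch of the same passes (names are mine; the real code gets the indent from Scintilla's `GetIndentAmount`, here approximated with an assumed tab width of 4):

```python
FOLD_BASE, FOLD_HEADER, FOLD_BLANK = 0x400, 0x2000, 0x1000

def indent_amount(line, tabwidth=4):
    """Leading whitespace width; tabs count as tabwidth columns."""
    n = 0
    for ch in line:
        if ch == " ":
            n += 1
        elif ch == "\t":
            n += tabwidth
        else:
            break
    return n

def fold_by_indentation(lines, start_line=0, start_level=FOLD_BASE):
    folds = {}                 # line -> [level, flag]
    prev_level = start_level
    current_line = start_line
    for line in lines:
        if line.strip():
            current_level = FOLD_BASE + indent_amount(line)
            if current_level > prev_level:       # next level
                i = current_line - 1             # skip back over blanks
                while True:
                    f = folds.get(i)
                    if not f:
                        break
                    elif f[1] == FOLD_BLANK:
                        i -= 1
                    else:
                        f[1] = FOLD_HEADER       # low indent gets the header
                        break
                folds[current_line] = [current_level, 0]
            elif current_level < prev_level:     # prev level
                f = folds.get(current_line - 1)
                if f:
                    f[0] = prev_level
                folds[current_line] = [current_level, 0]
            else:                                # same level
                folds[current_line] = [prev_level, 0]
            prev_level = current_level
        else:
            folds[current_line] = [prev_level, FOLD_BLANK]
        current_line += 1
    # flatten, as the Lua version does before returning
    return {line: level + flag for line, (level, flag) in folds.items()}
```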
lexer._RULES = {} - lexer._RULEORDER = {} - end - lexer._RULES[id] = rule - lexer._RULEORDER[#lexer._RULEORDER + 1] = id -end - -local function add_style(lexer, token_name, style) - local len = lexer._STYLES.len - if len == 32 then - len = len + 8 - end - if len >= 128 then - print('Too many styles defined (128 MAX)') - end - lexer._TOKENS[token_name] = len - lexer._STYLES[len] = style - lexer._STYLES.len = len + 1 -end - -local function join_tokens(lexer) - local patterns, order = lexer._RULES, lexer._RULEORDER - local token_rule = patterns[order[1]] - for i=2,#order do - token_rule = token_rule + patterns[order[i]] - end - lexer._TOKENRULE = token_rule - return lexer._TOKENRULE -end - -local function add_lexer(grammar, lexer, token_rule) - local token_rule = join_tokens(lexer) - local lexer_name = lexer._NAME - local children = lexer._CHILDREN - for i=1,#children do - local child = children[i] - if child._CHILDREN then - add_lexer(grammar, child) - end - local child_name = child._NAME - local rules = child._EMBEDDEDRULES[lexer_name] - local rules_token_rule = grammar['__'..child_name] or rules.token_rule - grammar[child_name] = (-rules.end_rule * rules_token_rule)^0 * rules.end_rule^-1 * V(lexer_name) - local embedded_child = '_' .. child_name - grammar[embedded_child] = rules.start_rule * (-rules.end_rule * rules_token_rule)^0 * rules.end_rule^-1 - token_rule = V(embedded_child) + token_rule - end - grammar['__' .. lexer_name] = token_rule - grammar[lexer_name] = token_rule^0 -end - -local function build_grammar(lexer, initial_rule) - local children = lexer._CHILDREN - if children then - local lexer_name = lexer._NAME - if not initial_rule then - initial_rule = lexer_name - end - local grammar = { initial_rule } - add_lexer(grammar, lexer) - lexer._INITIALRULE = initial_rule - lexer._GRAMMAR = Ct(P(grammar)) - else - lexer._GRAMMAR = Ct(join_tokens(lexer)^0) - end -end - --- so far. We need these local functions in the next one. 
--- --- Before 3.24 we had tokens[..] = { category, position }, now it's a two values. - -local lineparsers = { } - -function context.lex(text,init_style) - local lexer = global._LEXER - local grammar = lexer._GRAMMAR - if not grammar then - return { } - elseif lexer._LEXBYLINE then -- we could keep token - local tokens = { } - local offset = 0 - local noftokens = 0 - -- -- pre 3.24 - -- - -- for line in gmatch(text,'[^\r\n]*\r?\n?') do -- could be an lpeg - -- local line_tokens = lpegmatch(grammar,line) - -- if line_tokens then - -- for i=1,#line_tokens do - -- local token = line_tokens[i] - -- token[2] = token[2] + offset - -- noftokens = noftokens + 1 - -- tokens[noftokens] = token - -- end - -- end - -- offset = offset + #line - -- if noftokens > 0 and tokens[noftokens][2] ~= offset then - -- noftokens = noftokens + 1 - -- tokens[noftokens] = { 'default', offset + 1 } - -- end - -- end - - -- for line in gmatch(text,'[^\r\n]*\r?\n?') do - -- local line_tokens = lpegmatch(grammar,line) - -- if line_tokens then - -- for i=1,#line_tokens,2 do - -- noftokens = noftokens + 1 - -- tokens[noftokens] = line_tokens[i] - -- noftokens = noftokens + 1 - -- tokens[noftokens] = line_tokens[i + 1] + offset - -- end - -- end - -- offset = offset + #line - -- if noftokens > 0 and tokens[noftokens] ~= offset then - -- noftokens = noftokens + 1 - -- tokens[noftokens] = 'default' - -- noftokens = noftokens + 1 - -- tokens[noftokens] = offset + 1 - -- end - -- end - - local lineparser = lineparsers[lexer] - if not lineparser then -- probably a cmt is more efficient - lineparser = C((1-newline)^0 * newline) / function(line) - local length = #line - local line_tokens = length > 0 and lpegmatch(grammar,line) - if line_tokens then - for i=1,#line_tokens,2 do - noftokens = noftokens + 1 - tokens[noftokens] = line_tokens[i] - noftokens = noftokens + 1 - tokens[noftokens] = line_tokens[i + 1] + offset - end - end - offset = offset + length - if noftokens > 0 and tokens[noftokens] ~= 
offset then - noftokens = noftokens + 1 - tokens[noftokens] = 'default' - noftokens = noftokens + 1 - tokens[noftokens] = offset + 1 - end - end - lineparser = lineparser^0 - lineparsers[lexer] = lineparser - end - lpegmatch(lineparser,text) - return tokens - - elseif lexer._CHILDREN then - -- as we cannot print, tracing is not possible ... this might change as we can as well - -- generate them all in one go (sharing as much as possible) - local hash = lexer._HASH -- hm, was _hash - if not hash then - hash = { } - lexer._HASH = hash - end - grammar = hash[init_style] - if grammar then - lexer._GRAMMAR = grammar - else - for style, style_num in next, lexer._TOKENS do - if style_num == init_style then - -- the name of the lexers is filtered from the whitespace - -- specification - local lexer_name = match(style,'^(.+)_whitespace') or lexer._NAME - if lexer._INITIALRULE ~= lexer_name then - grammar = hash[lexer_name] - if not grammar then - build_grammar(lexer,lexer_name) - grammar = lexer._GRAMMAR - hash[lexer_name] = grammar - end - end - break - end - end - grammar = grammar or lexer._GRAMMAR - hash[init_style] = grammar - end - return lpegmatch(grammar,text) - else - return lpegmatch(grammar,text) - end -end - --- todo: keywords: one lookup and multiple matches - --- function context.token(name, patt) --- return Ct(patt * Cc(name) * Cp()) --- end --- --- -- hm, changed in 3.24 .. no longer a table - -function context.token(name, patt) - return patt * Cc(name) * Cp() -end - -lexer.fold = context.fold -lexer.lex = context.lex -lexer.token = context.token -lexer.exact_match = context.exact_match - --- helper .. alas ... the lexer's lua instance is rather crippled .. 
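As the comments above note, since 3.24 the token stream is a flat array of alternating style-name/position pairs rather than an array of `{name, position}` tables. When lexing line by line, each line's positions must be shifted by the running offset, and any unlexed tail of a line is padded with a `default` token. A Python sketch of that accumulation (names are mine; for simplicity this uses 0-based exclusive end positions, whereas the Lua/Scintilla code is 1-based):

```python
def lex_by_line(lines, lex_line):
    """Build a flat [name, pos, name, pos, ...] token stream; per-line
    positions are shifted by the running offset and gaps at end of
    line are padded with a 'default' token."""
    tokens = []
    offset = 0
    for line in lines:
        for name, pos in lex_line(line):
            tokens += [name, pos + offset]
        offset += len(line)
        if tokens and tokens[-1] != offset:
            tokens += ["default", offset]
    return tokens
```

The flat layout halves the number of tables the lexer allocates per run, which matters because this code runs on every visible-area repaint.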
not even --- math is part of it - -local floor = math and math.floor -local char = string.char - -if not floor then - - floor = function(n) - return tonumber(string.format("%d",n)) - end - - math = math or { } - - math.floor = floor - -end - -local function utfchar(n) - if n < 0x80 then - return char(n) - elseif n < 0x800 then - return char( - 0xC0 + floor(n/0x40), - 0x80 + (n % 0x40) - ) - elseif n < 0x10000 then - return char( - 0xE0 + floor(n/0x1000), - 0x80 + (floor(n/0x40) % 0x40), - 0x80 + (n % 0x40) - ) - elseif n < 0x40000 then - return char( - 0xF0 + floor(n/0x40000), - 0x80 + floor(n/0x1000), - 0x80 + (floor(n/0x40) % 0x40), - 0x80 + (n % 0x40) - ) - else - -- return char( - -- 0xF1 + floor(n/0x1000000), - -- 0x80 + floor(n/0x40000), - -- 0x80 + floor(n/0x1000), - -- 0x80 + (floor(n/0x40) % 0x40), - -- 0x80 + (n % 0x40) - -- ) - return "?" - end -end - -context.utfchar = utfchar - --- a helper from l-lpeg: - -local gmatch = string.gmatch - -local function make(t) - local p - for k, v in next, t do - if not p then - if next(v) then - p = P(k) * make(v) - else - p = P(k) - end - else - if next(v) then - p = p + P(k) * make(v) - else - p = p + P(k) - end - end - end - return p -end - -function lpeg.utfchartabletopattern(list) - local tree = { } - for i=1,#list do - local t = tree - for c in gmatch(list[i],".") do - if not t[c] then - t[c] = { } - end - t = t[c] - end - end - return make(tree) -end - --- patterns.invisibles = --- P(utfchar(0x00A0)) -- nbsp --- + P(utfchar(0x2000)) -- enquad --- + P(utfchar(0x2001)) -- emquad --- + P(utfchar(0x2002)) -- enspace --- + P(utfchar(0x2003)) -- emspace --- + P(utfchar(0x2004)) -- threeperemspace --- + P(utfchar(0x2005)) -- fourperemspace --- + P(utfchar(0x2006)) -- sixperemspace --- + P(utfchar(0x2007)) -- figurespace --- + P(utfchar(0x2008)) -- punctuationspace --- + P(utfchar(0x2009)) -- breakablethinspace --- + P(utfchar(0x200A)) -- hairspace --- + P(utfchar(0x200B)) -- zerowidthspace --- + P(utfchar(0x202F)) -- 
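The `utfchar` helper above encodes a code point to UTF-8 with plain arithmetic because the lexer's embedded Lua instance may not even provide `math` (hence the `floor` fallback); it gives up above U+3FFFF and returns `?`. A Python sketch of the same arithmetic, checked against the built-in encoder:

```python
def utfchar(n):
    """UTF-8 encode code point n by hand, mirroring the removed Lua
    helper (which returns '?' for anything above U+3FFFF)."""
    if n < 0x80:
        return bytes([n])
    if n < 0x800:
        return bytes([0xC0 + n // 0x40,
                      0x80 + n % 0x40])
    if n < 0x10000:
        return bytes([0xE0 + n // 0x1000,
                      0x80 + n // 0x40 % 0x40,
                      0x80 + n % 0x40])
    if n < 0x40000:
        return bytes([0xF0 + n // 0x40000,      # always 0xF0 below 0x40000
                      0x80 + n // 0x1000,       # < 0x40 in this range
                      0x80 + n // 0x40 % 0x40,
                      0x80 + n % 0x40])
    return b"?"
```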
narrownobreakspace --- + P(utfchar(0x205F)) -- math thinspace - -patterns.invisibles = lpeg.utfchartabletopattern { - utfchar(0x00A0), -- nbsp - utfchar(0x2000), -- enquad - utfchar(0x2001), -- emquad - utfchar(0x2002), -- enspace - utfchar(0x2003), -- emspace - utfchar(0x2004), -- threeperemspace - utfchar(0x2005), -- fourperemspace - utfchar(0x2006), -- sixperemspace - utfchar(0x2007), -- figurespace - utfchar(0x2008), -- punctuationspace - utfchar(0x2009), -- breakablethinspace - utfchar(0x200A), -- hairspace - utfchar(0x200B), -- zerowidthspace - utfchar(0x202F), -- narrownobreakspace - utfchar(0x205F), -- math thinspace -} - --- now we can make: - -patterns.iwordtoken = patterns.wordtoken - patterns.invisibles -patterns.iwordpattern = patterns.iwordtoken^3 - --- require("themes/scite-context-theme") - --- In order to deal with some bug in additional styles (I have no cue what is --- wrong, but additional styles get ignored and clash somehow) I just copy the --- original lexer code ... see original for comments. 
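The `lpeg.utfchartabletopattern` helper above turns a list of UTF-8 strings into a byte trie and compiles it to a single lpeg pattern, so shared prefixes (most of the invisible spaces listed start with the bytes `0xE2 0x80`) are tested once instead of per alternative. A Python sketch of the same trie idea with an explicit longest-match lookup (the end-of-word marker is my addition; the Lua `make` has none because every listed string is terminal there):

```python
def make_trie(words):
    """Byte trie as nested dicts, mirroring the nested-table tree that
    make()/utfchartabletopattern compiles into an lpeg pattern."""
    tree = {}
    for w in words:
        t = tree
        for b in w:
            t = t.setdefault(b, {})
        t[None] = True         # end-of-word marker (my addition)
    return tree

def match_prefix(tree, s, pos=0):
    """Length of the longest listed byte string starting at s[pos], or 0."""
    t, best, i = tree, 0, pos
    while i < len(s) and s[i] in t:
        t = t[s[i]]
        i += 1
        if None in t:
            best = i - pos
    return best
```

With such a pattern in hand, `iwordtoken` can then be defined as any word token that is not one of these invisibles, exactly as the two lines below do with lpeg set subtraction.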
diff --git a/context/data/scite/lexers/data/scite-context-data-context.lua b/context/data/scite/lexers/data/scite-context-data-context.lua deleted file mode 100644 index 0d577c8da..000000000 --- a/context/data/scite/lexers/data/scite-context-data-context.lua +++ /dev/null @@ -1,4 +0,0 @@ -return { - ["constants"]={ "zerocount", "minusone", "minustwo", "plusone", "plustwo", "plusthree", "plusfour", "plusfive", "plussix", "plusseven", "pluseight", "plusnine", "plusten", "plussixteen", "plushundred", "plusthousand", "plustenthousand", "plustwentythousand", "medcard", "maxcard", "zeropoint", "onepoint", "halfapoint", "onebasepoint", "maxdimen", "scaledpoint", "thousandpoint", "points", "halfpoint", "zeroskip", "zeromuskip", "onemuskip", "pluscxxvii", "pluscxxviii", "pluscclv", "pluscclvi", "normalpagebox", "endoflinetoken", "outputnewlinechar", "emptytoks", "empty", "undefined", "voidbox", "emptybox", "emptyvbox", "emptyhbox", "bigskipamount", "medskipamount", "smallskipamount", "fmtname", "fmtversion", "texengine", "texenginename", "texengineversion", "luatexengine", "pdftexengine", "xetexengine", "unknownengine", "etexversion", "pdftexversion", "xetexversion", "xetexrevision", "activecatcode", "bgroup", "egroup", "endline", "conditionaltrue", "conditionalfalse", "attributeunsetvalue", "uprotationangle", "rightrotationangle", "downrotationangle", "leftrotationangle", "inicatcodes", "ctxcatcodes", "texcatcodes", "notcatcodes", "txtcatcodes", "vrbcatcodes", "prtcatcodes", "nilcatcodes", "luacatcodes", "tpacatcodes", "tpbcatcodes", "xmlcatcodes", "escapecatcode", "begingroupcatcode", "endgroupcatcode", "mathshiftcatcode", "alignmentcatcode", "endoflinecatcode", "parametercatcode", "superscriptcatcode", "subscriptcatcode", "ignorecatcode", "spacecatcode", "lettercatcode", "othercatcode", "activecatcode", "commentcatcode", "invalidcatcode", "tabasciicode", "newlineasciicode", "formfeedasciicode", "endoflineasciicode", "endoffileasciicode", "spaceasciicode", 
"hashasciicode", "dollarasciicode", "commentasciicode", "ampersandasciicode", "colonasciicode", "backslashasciicode", "circumflexasciicode", "underscoreasciicode", "leftbraceasciicode", "barasciicode", "rightbraceasciicode", "tildeasciicode", "delasciicode", "lessthanasciicode", "morethanasciicode", "doublecommentsignal", "atsignasciicode", "exclamationmarkasciicode", "questionmarkasciicode", "doublequoteasciicode", "singlequoteasciicode", "forwardslashasciicode", "primeasciicode", "activemathcharcode", "activetabtoken", "activeformfeedtoken", "activeendoflinetoken", "batchmodecode", "nonstopmodecode", "scrollmodecode", "errorstopmodecode", "bottomlevelgroupcode", "simplegroupcode", "hboxgroupcode", "adjustedhboxgroupcode", "vboxgroupcode", "vtopgroupcode", "aligngroupcode", "noaligngroupcode", "outputgroupcode", "mathgroupcode", "discretionarygroupcode", "insertgroupcode", "vcentergroupcode", "mathchoicegroupcode", "semisimplegroupcode", "mathshiftgroupcode", "mathleftgroupcode", "vadjustgroupcode", "charnodecode", "hlistnodecode", "vlistnodecode", "rulenodecode", "insertnodecode", "marknodecode", "adjustnodecode", "ligaturenodecode", "discretionarynodecode", "whatsitnodecode", "mathnodecode", "gluenodecode", "kernnodecode", "penaltynodecode", "unsetnodecode", "mathsnodecode", "charifcode", "catifcode", "numifcode", "dimifcode", "oddifcode", "vmodeifcode", "hmodeifcode", "mmodeifcode", "innerifcode", "voidifcode", "hboxifcode", "vboxifcode", "xifcode", "eofifcode", "trueifcode", "falseifcode", "caseifcode", "definedifcode", "csnameifcode", "fontcharifcode", "fontslantperpoint", "fontinterwordspace", "fontinterwordstretch", "fontinterwordshrink", "fontexheight", "fontemwidth", "fontextraspace", "slantperpoint", "interwordspace", "interwordstretch", "interwordshrink", "exheight", "emwidth", "extraspace", "mathsupdisplay", "mathsupnormal", "mathsupcramped", "mathsubnormal", "mathsubcombined", "mathaxisheight", "startmode", "stopmode", "startnotmode", "stopnotmode", 
"startmodeset", "stopmodeset", "doifmode", "doifmodeelse", "doifnotmode", "startallmodes", "stopallmodes", "startnotallmodes", "stopnotallmodes", "doifallmodes", "doifallmodeselse", "doifnotallmodes", "startenvironment", "stopenvironment", "environment", "startcomponent", "stopcomponent", "component", "startproduct", "stopproduct", "product", "startproject", "stopproject", "project", "starttext", "stoptext", "startnotext", "stopnotext", "startdocument", "stopdocument", "documentvariable", "setupdocument", "startmodule", "stopmodule", "usemodule", "usetexmodule", "useluamodule", "setupmodule", "currentmoduleparameter", "moduleparameter", "startTEXpage", "stopTEXpage", "enablemode", "disablemode", "preventmode", "globalenablemode", "globaldisablemode", "globalpreventmode", "pushmode", "popmode", "typescriptone", "typescripttwo", "typescriptthree", "mathsizesuffix", "mathordcode", "mathopcode", "mathbincode", "mathrelcode", "mathopencode", "mathclosecode", "mathpunctcode", "mathalphacode", "mathinnercode", "mathnothingcode", "mathlimopcode", "mathnolopcode", "mathboxcode", "mathchoicecode", "mathaccentcode", "mathradicalcode", "constantnumber", "constantnumberargument", "constantdimen", "constantdimenargument", "constantemptyargument", "continueifinputfile", "luastringsep", "!!bs", "!!es", "lefttorightmark", "righttoleftmark", "breakablethinspace", "nobreakspace", "narrownobreakspace", "zerowidthnobreakspace", "ideographicspace", "ideographichalffillspace", "twoperemspace", "threeperemspace", "fourperemspace", "fiveperemspace", "sixperemspace", "figurespace", "punctuationspace", "hairspace", "zerowidthspace", "zerowidthnonjoiner", "zerowidthjoiner", "zwnj", "zwj" }, - ["helpers"]={ "startsetups", "stopsetups", "startxmlsetups", "stopxmlsetups", "startluasetups", "stopluasetups", "starttexsetups", "stoptexsetups", "startrawsetups", "stoprawsetups", "startlocalsetups", "stoplocalsetups", "starttexdefinition", "stoptexdefinition", "starttexcode", "stoptexcode", 
"startcontextcode", "stopcontextcode", "doifsetupselse", "doifsetups", "doifnotsetups", "setup", "setups", "texsetup", "xmlsetup", "luasetup", "directsetup", "doifelsecommandhandler", "doifnotcommandhandler", "doifcommandhandler", "newmode", "setmode", "resetmode", "newsystemmode", "setsystemmode", "resetsystemmode", "pushsystemmode", "popsystemmode", "booleanmodevalue", "newcount", "newdimen", "newskip", "newmuskip", "newbox", "newtoks", "newread", "newwrite", "newmarks", "newinsert", "newattribute", "newif", "newlanguage", "newfamily", "newfam", "newhelp", "then", "begcsname", "strippedcsname", "firstargumentfalse", "firstargumenttrue", "secondargumentfalse", "secondargumenttrue", "thirdargumentfalse", "thirdargumenttrue", "fourthargumentfalse", "fourthargumenttrue", "fifthargumentfalse", "fifthsargumenttrue", "sixthargumentfalse", "sixtsargumenttrue", "doglobal", "dodoglobal", "redoglobal", "resetglobal", "donothing", "dontcomplain", "forgetall", "donetrue", "donefalse", "htdp", "unvoidbox", "hfilll", "vfilll", "mathbox", "mathlimop", "mathnolop", "mathnothing", "mathalpha", "currentcatcodetable", "defaultcatcodetable", "catcodetablename", "newcatcodetable", "startcatcodetable", "stopcatcodetable", "startextendcatcodetable", "stopextendcatcodetable", "pushcatcodetable", "popcatcodetable", "restorecatcodes", "setcatcodetable", "letcatcodecommand", "defcatcodecommand", "uedcatcodecommand", "hglue", "vglue", "hfillneg", "vfillneg", "hfilllneg", "vfilllneg", "ruledhss", "ruledhfil", "ruledhfill", "ruledhfilneg", "ruledhfillneg", "normalhfillneg", "ruledvss", "ruledvfil", "ruledvfill", "ruledvfilneg", "ruledvfillneg", "normalvfillneg", "ruledhbox", "ruledvbox", "ruledvtop", "ruledvcenter", "ruledmbox", "ruledhskip", "ruledvskip", "ruledkern", "ruledmskip", "ruledmkern", "ruledhglue", "ruledvglue", "normalhglue", "normalvglue", "ruledpenalty", "filledhboxb", "filledhboxr", "filledhboxg", "filledhboxc", "filledhboxm", "filledhboxy", "filledhboxk", "scratchcounter", 
"globalscratchcounter", "scratchdimen", "globalscratchdimen", "scratchskip", "globalscratchskip", "scratchmuskip", "globalscratchmuskip", "scratchtoks", "globalscratchtoks", "scratchbox", "globalscratchbox", "normalbaselineskip", "normallineskip", "normallineskiplimit", "availablehsize", "localhsize", "setlocalhsize", "nextbox", "dowithnextbox", "dowithnextboxcs", "dowithnextboxcontent", "dowithnextboxcontentcs", "scratchwidth", "scratchheight", "scratchdepth", "scratchoffset", "scratchdistance", "scratchhsize", "scratchvsize", "scratchxoffset", "scratchyoffset", "scratchhoffset", "scratchvoffset", "scratchxposition", "scratchyposition", "scratchtopoffset", "scratchbottomoffset", "scratchleftoffset", "scratchrightoffset", "scratchcounterone", "scratchcountertwo", "scratchcounterthree", "scratchdimenone", "scratchdimentwo", "scratchdimenthree", "scratchskipone", "scratchskiptwo", "scratchskipthree", "scratchmuskipone", "scratchmuskiptwo", "scratchmuskipthree", "scratchtoksone", "scratchtokstwo", "scratchtoksthree", "scratchboxone", "scratchboxtwo", "scratchboxthree", "scratchnx", "scratchny", "scratchmx", "scratchmy", "scratchunicode", "scratchleftskip", "scratchrightskip", "scratchtopskip", "scratchbottomskip", "doif", "doifnot", "doifelse", "doifinset", "doifnotinset", "doifinsetelse", "doifnextcharelse", "doifnextoptionalelse", "doifnextbgroupelse", "doifnextparenthesiselse", "doiffastoptionalcheckelse", "doifundefinedelse", "doifdefinedelse", "doifundefined", "doifdefined", "doifelsevalue", "doifvalue", "doifnotvalue", "doifnothing", "doifsomething", "doifelsenothing", "doifsomethingelse", "doifvaluenothing", "doifvaluesomething", "doifelsevaluenothing", "doifdimensionelse", "doifnumberelse", "doifnumber", "doifnotnumber", "doifcommonelse", "doifcommon", "doifnotcommon", "doifinstring", "doifnotinstring", "doifinstringelse", "doifassignmentelse", "docheckassignment", "tracingall", "tracingnone", "loggingall", "removetoks", "appendtoks", "prependtoks", 
"appendtotoks", "prependtotoks", "to", "endgraf", "endpar", "everyendpar", "reseteverypar", "finishpar", "empty", "null", "space", "quad", "enspace", "obeyspaces", "obeylines", "obeyedspace", "obeyedline", "normalspace", "executeifdefined", "singleexpandafter", "doubleexpandafter", "tripleexpandafter", "dontleavehmode", "removelastspace", "removeunwantedspaces", "keepunwantedspaces", "wait", "writestatus", "define", "defineexpandable", "redefine", "setmeasure", "setemeasure", "setgmeasure", "setxmeasure", "definemeasure", "freezemeasure", "measure", "measured", "installcorenamespace", "getvalue", "getuvalue", "setvalue", "setevalue", "setgvalue", "setxvalue", "letvalue", "letgvalue", "resetvalue", "undefinevalue", "ignorevalue", "setuvalue", "setuevalue", "setugvalue", "setuxvalue", "globallet", "glet", "udef", "ugdef", "uedef", "uxdef", "checked", "unique", "getparameters", "geteparameters", "getgparameters", "getxparameters", "forgetparameters", "copyparameters", "getdummyparameters", "dummyparameter", "directdummyparameter", "setdummyparameter", "letdummyparameter", "usedummystyleandcolor", "usedummystyleparameter", "usedummycolorparameter", "processcommalist", "processcommacommand", "quitcommalist", "quitprevcommalist", "processaction", "processallactions", "processfirstactioninset", "processallactionsinset", "unexpanded", "expanded", "startexpanded", "stopexpanded", "protected", "protect", "unprotect", "firstofoneargument", "firstoftwoarguments", "secondoftwoarguments", "firstofthreearguments", "secondofthreearguments", "thirdofthreearguments", "firstoffourarguments", "secondoffourarguments", "thirdoffourarguments", "fourthoffourarguments", "firstoffivearguments", "secondoffivearguments", "thirdoffivearguments", "fourthoffivearguments", "fifthoffivearguments", "firstofsixarguments", "secondofsixarguments", "thirdofsixarguments", "fourthofsixarguments", "fifthofsixarguments", "sixthofsixarguments", "firstofoneunexpanded", "gobbleoneargument", 
"gobbletwoarguments", "gobblethreearguments", "gobblefourarguments", "gobblefivearguments", "gobblesixarguments", "gobblesevenarguments", "gobbleeightarguments", "gobbleninearguments", "gobbletenarguments", "gobbleoneoptional", "gobbletwooptionals", "gobblethreeoptionals", "gobblefouroptionals", "gobblefiveoptionals", "dorecurse", "doloop", "exitloop", "dostepwiserecurse", "recurselevel", "recursedepth", "dofastloopcs", "dowith", "newconstant", "setnewconstant", "setconstant", "setconstantvalue", "newconditional", "settrue", "setfalse", "settruevalue", "setfalsevalue", "newmacro", "setnewmacro", "newfraction", "newsignal", "dosingleempty", "dodoubleempty", "dotripleempty", "doquadrupleempty", "doquintupleempty", "dosixtupleempty", "doseventupleempty", "dosingleargument", "dodoubleargument", "dotripleargument", "doquadrupleargument", "doquintupleargument", "dosixtupleargument", "doseventupleargument", "dosinglegroupempty", "dodoublegroupempty", "dotriplegroupempty", "doquadruplegroupempty", "doquintuplegroupempty", "permitspacesbetweengroups", "dontpermitspacesbetweengroups", "nopdfcompression", "maximumpdfcompression", "normalpdfcompression", "modulonumber", "dividenumber", "getfirstcharacter", "doiffirstcharelse", "startnointerference", "stopnointerference", "twodigits", "threedigits", "leftorright", "strut", "setstrut", "strutbox", "strutht", "strutdp", "strutwd", "struthtdp", "begstrut", "endstrut", "lineheight", "ordordspacing", "ordopspacing", "ordbinspacing", "ordrelspacing", "ordopenspacing", "ordclosespacing", "ordpunctspacing", "ordinnerspacing", "opordspacing", "opopspacing", "opbinspacing", "oprelspacing", "opopenspacing", "opclosespacing", "oppunctspacing", "opinnerspacing", "binordspacing", "binopspacing", "binbinspacing", "binrelspacing", "binopenspacing", "binclosespacing", "binpunctspacing", "bininnerspacing", "relordspacing", "relopspacing", "relbinspacing", "relrelspacing", "relopenspacing", "relclosespacing", "relpunctspacing", "relinnerspacing", 
"openordspacing", "openopspacing", "openbinspacing", "openrelspacing", "openopenspacing", "openclosespacing", "openpunctspacing", "openinnerspacing", "closeordspacing", "closeopspacing", "closebinspacing", "closerelspacing", "closeopenspacing", "closeclosespacing", "closepunctspacing", "closeinnerspacing", "punctordspacing", "punctopspacing", "punctbinspacing", "punctrelspacing", "punctopenspacing", "punctclosespacing", "punctpunctspacing", "punctinnerspacing", "innerordspacing", "inneropspacing", "innerbinspacing", "innerrelspacing", "inneropenspacing", "innerclosespacing", "innerpunctspacing", "innerinnerspacing", "normalreqno", "startimath", "stopimath", "normalstartimath", "normalstopimath", "startdmath", "stopdmath", "normalstartdmath", "normalstopdmath", "uncramped", "cramped", "triggermathstyle", "mathstylefont", "mathsmallstylefont", "mathstyleface", "mathsmallstyleface", "mathstylecommand", "mathpalette", "mathstylehbox", "mathstylevbox", "mathstylevcenter", "mathstylevcenteredhbox", "mathstylevcenteredvbox", "mathtext", "setmathsmalltextbox", "setmathtextbox", "triggerdisplaystyle", "triggertextstyle", "triggerscriptstyle", "triggerscriptscriptstyle", "triggeruncrampedstyle", "triggercrampedstyle", "triggersmallstyle", "triggeruncrampedsmallstyle", "triggercrampedsmallstyle", "triggerbigstyle", "triggeruncrampedbigstyle", "triggercrampedbigstyle", "luaexpr", "expdoifelse", "expdoif", "expdoifnot", "expdoifcommonelse", "expdoifinsetelse", "ctxdirectlua", "ctxlatelua", "ctxsprint", "ctxwrite", "ctxcommand", "ctxdirectcommand", "ctxlatecommand", "ctxreport", "ctxlua", "luacode", "lateluacode", "directluacode", "registerctxluafile", "ctxloadluafile", "luaversion", "luamajorversion", "luaminorversion", "ctxluacode", "luaconditional", "luaexpanded", "startluaparameterset", "stopluaparameterset", "luaparameterset", "definenamedlua", "obeylualines", "obeyluatokens", "startluacode", "stopluacode", "startlua", "stoplua", "carryoverpar", "assumelongusagecs", 
"Umathbotaccent", "righttolefthbox", "lefttorighthbox", "righttoleftvbox", "lefttorightvbox", "righttoleftvtop", "lefttorightvtop", "rtlhbox", "ltrhbox", "rtlvbox", "ltrvbox", "rtlvtop", "ltrvtop", "autodirhbox", "autodirvbox", "autodirvtop", "lefttoright", "righttoleft", "synchronizelayoutdirection", "synchronizedisplaydirection", "synchronizeinlinedirection", "lesshyphens", "morehyphens", "nohyphens", "dohyphens", "Ucheckedstartdisplaymath", "Ucheckedstopdisplaymath" }, -}
\ No newline at end of file diff --git a/context/data/scite/lexers/scite-context-lexer-mps.lua b/context/data/scite/lexers/scite-context-lexer-mps.lua deleted file mode 100644 index f0d88eb3b..000000000 --- a/context/data/scite/lexers/scite-context-lexer-mps.lua +++ /dev/null @@ -1,155 +0,0 @@ -local info = { - version = 1.002, - comment = "scintilla lpeg lexer for metafun", - author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", - copyright = "PRAGMA ADE / ConTeXt Development Team", - license = "see context related readme files", -} - -if not lexer._CONTEXTEXTENSIONS then require("scite-context-lexer") end - -local lexer = lexer -local global, string, table, lpeg = _G, string, table, lpeg -local token, exact_match = lexer.token, lexer.exact_match -local P, R, S, V, C, Cmt = lpeg.P, lpeg.R, lpeg.S, lpeg.V, lpeg.C, lpeg.Cmt -local type = type - -local metafunlexer = { _NAME = "mps", _FILENAME = "scite-context-lexer-mps" } -local whitespace = lexer.WHITESPACE -local context = lexer.context - -local metapostprimitives = { } -local metapostinternals = { } -local metapostshortcuts = { } -local metapostcommands = { } - -local metafuninternals = { } -local metafunshortcuts = { } -local metafuncommands = { } - -local mergedshortcuts = { } -local mergedinternals = { } - -do - - local definitions = context.loaddefinitions("scite-context-data-metapost") - - if definitions then - metapostprimitives = definitions.primitives or { } - metapostinternals = definitions.internals or { } - metapostshortcuts = definitions.shortcuts or { } - metapostcommands = definitions.commands or { } - end - - local definitions = context.loaddefinitions("scite-context-data-metafun") - - if definitions then - metafuninternals = definitions.internals or { } - metafunshortcuts = definitions.shortcuts or { } - metafuncommands = definitions.commands or { } - end - - for i=1,#metapostshortcuts do - mergedshortcuts[#mergedshortcuts+1] = metapostshortcuts[i] - end - for i=1,#metafunshortcuts do - 
mergedshortcuts[#mergedshortcuts+1] = metafunshortcuts[i] - end - - for i=1,#metapostinternals do - mergedinternals[#mergedinternals+1] = metapostinternals[i] - end - for i=1,#metafuninternals do - mergedinternals[#mergedinternals+1] = metafuninternals[i] - end - -end - -local space = lexer.space -- S(" \n\r\t\f\v") -local any = lexer.any - -local dquote = P('"') -local cstoken = R("az","AZ") + P("_") -local mptoken = R("az","AZ") -local leftbrace = P("{") -local rightbrace = P("}") -local number = context.patterns.real - -local cstokentex = R("az","AZ","\127\255") + S("@!?_") - --- we could collapse as in tex - -local spacing = token(whitespace, space^1) -local rest = token('default', any) -local comment = token('comment', P('%') * (1-S("\n\r"))^0) -local internal = token('reserved', exact_match(mergedshortcuts,false)) -local shortcut = token('data', exact_match(mergedinternals)) -local helper = token('command', exact_match(metafuncommands)) -local plain = token('plain', exact_match(metapostcommands)) -local quoted = token('quote', dquote) - * token('string', P(1-dquote)^0) - * token('quote', dquote) -local texstuff = token('quote', P("btex ") + P("verbatimtex ")) - * token('string', P(1-P(" etex"))^0) - * token('quote', P(" etex")) -local primitive = token('primitive', exact_match(metapostprimitives)) -local identifier = token('default', cstoken^1) -local number = token('number', number) -local grouping = token('grouping', S("()[]{}")) -- can be an option -local special = token('special', S("#()[]{}<>=:\"")) -- or else := <> etc split -local texlike = token('warning', P("\\") * cstokentex^1) -local extra = token('extra', P("+-+") + P("++") + S("`~%^&_-+*/\'|\\")) - -local nested = P { leftbrace * (V(1) + (1-rightbrace))^0 * rightbrace } -local texlike = token('embedded', P("\\") * (P("MP") + P("mp")) * mptoken^1) - * spacing^0 - * token('grouping', leftbrace) - * token('rest', (nested + (1-rightbrace))^0 ) - * token('grouping', rightbrace) - + token('warning', 
P("\\") * cstokentex^1) - -metafunlexer._rules = { - { 'whitespace', spacing }, - { 'comment', comment }, - { 'internal', internal }, - { 'shortcut', shortcut }, - { 'helper', helper }, - { 'plain', plain }, - { 'primitive', primitive }, - { 'texstuff', texstuff }, - { 'identifier', identifier }, - { 'number', number }, - { 'quoted', quoted }, - -- { 'grouping', grouping }, -- can be an option - { 'special', special }, - { 'texlike', texlike }, - { 'extra', extra }, - { 'rest', rest }, -} - -metafunlexer._tokenstyles = context.styleset - -metafunlexer._foldpattern = R("az")^2 -- separate entry else interference - -metafunlexer._foldsymbols = { - _patterns = { - '[a-z][a-z]+', - }, - ["primitive"] = { - ["beginfig"] = 1, - ["endfig"] = -1, - ["def"] = 1, - ["vardef"] = 1, - ["primarydef"] = 1, - ["secondarydef" ] = 1, - ["tertiarydef"] = 1, - ["enddef"] = -1, - ["if"] = 1, - ["fi"] = -1, - ["for"] = 1, - ["forever"] = 1, - ["endfor"] = -1, - } -} - -return metafunlexer diff --git a/context/data/scite/lexers/scite-context-lexer-pdf-object.lua b/context/data/scite/lexers/scite-context-lexer-pdf-object.lua deleted file mode 100644 index 6d0b6d8da..000000000 --- a/context/data/scite/lexers/scite-context-lexer-pdf-object.lua +++ /dev/null @@ -1,117 +0,0 @@ -local info = { - version = 1.002, - comment = "scintilla lpeg lexer for pdf", - author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", - copyright = "PRAGMA ADE / ConTeXt Development Team", - license = "see context related readme files", -} - -local lexer = lexer -local token = lexer.token -local P, R, S, C, V = lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.V - -local pdfobjectlexer = { _NAME = "pdf-object", _FILENAME = "scite-context-lexer-pdf-object" } -local whitespace = lexer.WHITESPACE -- triggers states -local context = lexer.context -local patterns = context.patterns - -local space = lexer.space -local somespace = space^1 - -local newline = S("\n\r") -local real = patterns.real -local cardinal = patterns.cardinal - -local 
lparent = P("(") -local rparent = P(")") -local langle = P("<") -local rangle = P(">") -local escape = P("\\") -local anything = P(1) -local unicodetrigger = P("feff") - -local nametoken = 1 - space - S("<>/[]()") -local name = P("/") * nametoken^1 - -local p_string = P { ( escape * anything + lparent * V(1) * rparent + (1 - rparent) )^0 } - -local t_spacing = token(whitespace, space^1) -local t_spaces = token(whitespace, space^1)^0 - -local p_stream = P("stream") -local p_endstream = P("endstream") ------ p_obj = P("obj") -local p_endobj = P("endobj") -local p_reference = P("R") - -local p_objectnumber = patterns.cardinal -local p_comment = P('%') * (1-S("\n\r"))^0 - -local string = token("quote", lparent) - * token("string", p_string) - * token("quote", rparent) -local unicode = token("quote", langle) - * token("plain", unicodetrigger) - * token("string", (1-rangle)^1) - * token("quote", rangle) -local whatsit = token("quote", langle) - * token("string", (1-rangle)^1) - * token("quote", rangle) -local keyword = token("command", name) -local constant = token("constant", name) -local number = token('number', real) --- local reference = token("number", cardinal) --- * t_spacing --- * token("number", cardinal) -local reserved = token("number", P("true") + P("false") + P("NULL")) -local reference = token("warning", cardinal) - * t_spacing - * token("warning", cardinal) - * t_spacing - * token("keyword", p_reference) -local t_comment = token("comment", p_comment) - --- t_openobject = token("number", p_objectnumber) --- * t_spacing --- * token("number", p_objectnumber) --- * t_spacing --- * token("keyword", p_obj) -local t_closeobject = token("keyword", p_endobj) - -local t_opendictionary = token("grouping", P("<<")) -local t_closedictionary = token("grouping", P(">>")) - -local t_openarray = token("grouping", P("[")) -local t_closearray = token("grouping", P("]")) - -local t_stream = token("keyword", p_stream) --- * token("default", newline * 
(1-newline*p_endstream*newline)^1 * newline) - * token("default", (1 - p_endstream)^1) - * token("keyword", p_endstream) - -local t_dictionary = { "dictionary", - dictionary = t_opendictionary * (t_spaces * keyword * t_spaces * V("whatever"))^0 * t_spaces * t_closedictionary, - array = t_openarray * (t_spaces * V("whatever"))^0 * t_spaces * t_closearray, - whatever = V("dictionary") + V("array") + constant + reference + string + unicode + number + whatsit, - } - -local t_object = { "object", -- weird that we need to catch the end here (probably otherwise an invalid lpeg) - object = t_spaces * (V("dictionary") * t_spaces * t_stream^-1 + V("array") + V("number") + t_spaces) * t_spaces * t_closeobject, - dictionary = t_opendictionary * (t_spaces * keyword * t_spaces * V("whatever"))^0 * t_spaces * t_closedictionary, - array = t_openarray * (t_spaces * V("whatever"))^0 * t_spaces * t_closearray, - number = number, - whatever = V("dictionary") + V("array") + constant + reference + string + unicode + number + reserved + whatsit, - } - -pdfobjectlexer._shared = { - dictionary = t_dictionary, -} - -pdfobjectlexer._rules = { - { 'whitespace', t_spacing }, - { 'object', t_object }, -} - -pdfobjectlexer._tokenstyles = context.styleset - -return pdfobjectlexer diff --git a/context/data/scite/lexers/scite-context-lexer-pdf-xref.lua b/context/data/scite/lexers/scite-context-lexer-pdf-xref.lua deleted file mode 100644 index f205e9130..000000000 --- a/context/data/scite/lexers/scite-context-lexer-pdf-xref.lua +++ /dev/null @@ -1,51 +0,0 @@ -local info = { - version = 1.002, - comment = "scintilla lpeg lexer for pdf xref", - author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", - copyright = "PRAGMA ADE / ConTeXt Development Team", - license = "see context related readme files", -} - -local lexer = lexer -local token = lexer.token -local P, R = lpeg.P, lpeg.R - --- xref --- cardinal cardinal [character] --- .. 
--- %%EOF | startxref | trailer - -local pdfxreflexer = { _NAME = "pdf-xref", _FILENAME = "scite-context-lexer-pdf-xref" } -local whitespace = lexer.WHITESPACE -- triggers states -local context = lexer.context -local patterns = context.patterns - -local pdfobjectlexer = lexer.load("scite-context-lexer-pdf-object") - -local spacing = patterns.spacing - -local t_spacing = token(whitespace, spacing) - -local p_trailer = P("trailer") - -local t_number = token("number", R("09")^1) - * t_spacing - * token("number", R("09")^1) - * t_spacing - * (token("keyword", R("az","AZ")) * t_spacing)^-1 - -local t_xref = t_number^1 - --- local t_xref = token("default", (1-p_trailer)^1) --- * token("keyword", p_trailer) --- * t_spacing --- * pdfobjectlexer._shared.dictionary - -pdfxreflexer._rules = { - { 'whitespace', t_spacing }, - { 'xref', t_xref }, -} - -pdfxreflexer._tokenstyles = context.styleset - -return pdfxreflexer diff --git a/context/data/scite/lexers/scite-context-lexer-pdf.lua b/context/data/scite/lexers/scite-context-lexer-pdf.lua deleted file mode 100644 index 685fdb16e..000000000 --- a/context/data/scite/lexers/scite-context-lexer-pdf.lua +++ /dev/null @@ -1,80 +0,0 @@ -local info = { - version = 1.002, - comment = "scintilla lpeg lexer for pdf", - author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", - copyright = "PRAGMA ADE / ConTeXt Development Team", - license = "see context related readme files", -} - -if not lexer._CONTEXTEXTENSIONS then require("scite-context-lexer") end - -local lexer = lexer -local token = lexer.token -local P, R, S = lpeg.P, lpeg.R, lpeg.S - -local pdflexer = { _NAME = "pdf", _FILENAME = "scite-context-lexer-pdf" } -local whitespace = lexer.WHITESPACE -- triggers states - -local pdfobjectlexer = lexer.load("scite-context-lexer-pdf-object") -local pdfxreflexer = lexer.load("scite-context-lexer-pdf-xref") - -local context = lexer.context -local patterns = context.patterns - -local space = patterns.space -local spacing = patterns.spacing -local 
nospacing = patterns.nospacing -local anything = patterns.anything -local restofline = patterns.restofline - -local t_spacing = token(whitespace, spacing) -local t_rest = token("default", nospacing) -- anything - -local p_obj = P("obj") -local p_endobj = P("endobj") -local p_xref = P("xref") -local p_startxref = P("startxref") -local p_eof = P("%%EOF") -local p_trailer = P("trailer") - -local p_objectnumber = patterns.cardinal -local p_comment = P('%') * restofline - -local t_comment = token("comment", p_comment) -local t_openobject = token("warning", p_objectnumber) - * t_spacing - * token("warning", p_objectnumber) - * t_spacing - * token("keyword", p_obj) - * t_spacing^0 -local t_closeobject = token("keyword", p_endobj) - --- We could do clever xref parsing but why should we (i.e. we should check for --- the xref body. As a pdf file is not edited, we could do without a nested --- lexer anyway. - -local t_trailer = token("keyword", p_trailer) - * t_spacing - * pdfobjectlexer._shared.dictionary - -local t_openxref = token("plain", p_xref) -local t_closexref = token("plain", p_startxref) - + token("comment", p_eof) - + t_trailer -local t_startxref = token("plain", p_startxref) - * t_spacing - * token("number", R("09")^1) - -lexer.embed_lexer(pdflexer, pdfobjectlexer, t_openobject, t_closeobject) -lexer.embed_lexer(pdflexer, pdfxreflexer, t_openxref, t_closexref) - -pdflexer._rules = { - { 'whitespace', t_spacing }, - { 'comment', t_comment }, - { 'xref', t_startxref }, - { 'rest', t_rest }, -} - -pdflexer._tokenstyles = context.styleset - -return pdflexer diff --git a/context/data/scite/lexers/scite-context-lexer-web.lua b/context/data/scite/lexers/scite-context-lexer-web.lua deleted file mode 100644 index f59a3205d..000000000 --- a/context/data/scite/lexers/scite-context-lexer-web.lua +++ /dev/null @@ -1,155 +0,0 @@ -local info = { - version = 1.002, - comment = "scintilla lpeg lexer for w", - author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", - copyright = "PRAGMA 
ADE / ConTeXt Development Team", - license = "see context related readme files", -} - --- this will be extended - -if not lexer._CONTEXTEXTENSIONS then require("scite-context-lexer") end - -local lexer = lexer -local token, style, colors, exact_match, no_style = lexer.token, lexer.style, lexer.colors, lexer.exact_match, lexer.style_nothing -local P, R, S, C, Cg, Cb, Cs, Cmt, lpegmatch = lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.Cg, lpeg.Cb, lpeg.Cs, lpeg.Cmt, lpeg.match -local setmetatable = setmetatable - -local weblexer = { _NAME = "web", _FILENAME = "scite-context-lexer-web" } -local whitespace = lexer.WHITESPACE -local context = lexer.context - -local keywords = { -- copied from cpp.lua - -- c - 'asm', 'auto', 'break', 'case', 'const', 'continue', 'default', 'do', 'else', - 'extern', 'false', 'for', 'goto', 'if', 'inline', 'register', 'return', - 'sizeof', 'static', 'switch', 'true', 'typedef', 'volatile', 'while', - 'restrict', - -- hm - '_Bool', '_Complex', '_Pragma', '_Imaginary', - -- c++. 
- 'catch', 'class', 'const_cast', 'delete', 'dynamic_cast', 'explicit', - 'export', 'friend', 'mutable', 'namespace', 'new', 'operator', 'private', - 'protected', 'public', 'signals', 'slots', 'reinterpret_cast', - 'static_assert', 'static_cast', 'template', 'this', 'throw', 'try', 'typeid', - 'typename', 'using', 'virtual' -} - -local datatypes = { -- copied from cpp.lua - 'bool', 'char', 'double', 'enum', 'float', 'int', 'long', 'short', 'signed', - 'struct', 'union', 'unsigned', 'void' -} - -local macros = { -- copied from cpp.lua - 'define', 'elif', 'else', 'endif', 'error', 'if', 'ifdef', 'ifndef', 'import', - 'include', 'line', 'pragma', 'undef', 'using', 'warning' -} - -local space = lexer.space -- S(" \n\r\t\f\v") -local any = lexer.any -local patterns = context.patterns -local restofline = patterns.restofline -local startofline = patterns.startofline - -local squote = P("'") -local dquote = P('"') -local escaped = P("\\") * P(1) -local slashes = P('//') -local begincomment = P("/*") -local endcomment = P("*/") -local percent = P("%") - -local spacing = token(whitespace, space^1) -local rest = token("default", any) - -local shortcomment = token("comment", slashes * restofline^0) -local longcomment = token("comment", begincomment * (1-endcomment)^0 * endcomment^-1) -local texcomment = token("comment", percent * restofline^0) - -local shortstring = token("quote", dquote) -- can be shared - * token("string", (escaped + (1-dquote))^0) - * token("quote", dquote) - + token("quote", squote) - * token("string", (escaped + (1-squote))^0) - * token("quote", squote) - -local integer = P("-")^-1 * (lexer.hex_num + lexer.dec_num) -local number = token("number", lexer.float + integer) - -local validword = R("AZ","az","__") * R("AZ","az","__","09")^0 - -local identifier = token("default",validword) - -local operator = token("special", S('+-*/%^!=<>;:{}[]().&|?~')) - ------ optionalspace = spacing^0 - -local p_keywords = exact_match(keywords ) -local p_datatypes = 
exact_match(datatypes) -local p_macros = exact_match(macros) - -local keyword = token("keyword", p_keywords) -local datatype = token("keyword", p_datatypes) -local identifier = token("default", validword) - -local macro = token("data", #P('#') * startofline * P('#') * S('\t ')^0 * p_macros) - -local beginweb = P("@") -local endweb = P("@c") - -local webcomment = token("comment", #beginweb * startofline * beginweb * (1-endweb)^0 * endweb) - -local texlexer = lexer.load('scite-context-lexer-tex') - -lexer.embed_lexer(weblexer, texlexer, #beginweb * startofline * token("comment",beginweb), token("comment",endweb)) - -weblexer._rules = { - { 'whitespace', spacing }, - { 'keyword', keyword }, - { 'type', datatype }, - { 'identifier', identifier }, - { 'string', shortstring }, - -- { 'webcomment', webcomment }, - { 'texcomment', texcomment }, - { 'longcomment', longcomment }, - { 'shortcomment', shortcomment }, - { 'number', number }, - { 'macro', macro }, - { 'operator', operator }, - { 'rest', rest }, -} - -weblexer._tokenstyles = context.styleset - -weblexer._foldpattern = P("/*") + P("*/") + S("{}") -- separate entry else interference - -weblexer._foldsymbols = { - _patterns = { - '[{}]', - '/%*', - '%*/', - }, - -- ["data"] = { -- macro - -- ['region'] = 1, - -- ['endregion'] = -1, - -- ['if'] = 1, - -- ['ifdef'] = 1, - -- ['ifndef'] = 1, - -- ['endif'] = -1, - -- }, - ["special"] = { -- operator - ['{'] = 1, - ['}'] = -1, - }, - ["comment"] = { - ['/*'] = 1, - ['*/'] = -1, - } -} - --- -- by indentation: --- -weblexer._foldpatterns = nil -weblexer._foldsymbols = nil - -return weblexer diff --git a/context/data/scite/lexers/scite-context-lexer-xml-comment.lua b/context/data/scite/lexers/scite-context-lexer-xml-comment.lua deleted file mode 100644 index 104310f94..000000000 --- a/context/data/scite/lexers/scite-context-lexer-xml-comment.lua +++ /dev/null @@ -1,42 +0,0 @@ -local info = { - version = 1.002, - comment = "scintilla lpeg lexer for xml comments", - author 
= "Hans Hagen, PRAGMA-ADE, Hasselt NL", - copyright = "PRAGMA ADE / ConTeXt Development Team", - license = "see context related readme files", -} - -local lexer = lexer -local token = lexer.token -local P = lpeg.P - -local xmlcommentlexer = { _NAME = "xml-comment", _FILENAME = "scite-context-lexer-xml-comment" } -local whitespace = lexer.WHITESPACE -local context = lexer.context - -local space = lexer.space -local nospace = 1 - space - P("-->") - -local p_spaces = token(whitespace, space ^1) -local p_comment = token("comment", nospace^1) - -xmlcommentlexer._rules = { - { "whitespace", p_spaces }, - { "comment", p_comment }, -} - -xmlcommentlexer._tokenstyles = context.styleset - -xmlcommentlexer._foldpattern = P("<!--") + P("-->") - -xmlcommentlexer._foldsymbols = { - _patterns = { - "<%!%-%-", "%-%->", -- comments - }, - ["comment"] = { - ["<!--"] = 1, - ["-->" ] = -1, - } -} - -return xmlcommentlexer diff --git a/context/data/scite/lexers/scite-context-lexer-xml-script.lua b/context/data/scite/lexers/scite-context-lexer-xml-script.lua deleted file mode 100644 index fd1aae7f7..000000000 --- a/context/data/scite/lexers/scite-context-lexer-xml-script.lua +++ /dev/null @@ -1,30 +0,0 @@ -local info = { - version = 1.002, - comment = "scintilla lpeg lexer for xml cdata", - author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", - copyright = "PRAGMA ADE / ConTeXt Development Team", - license = "see context related readme files", -} - -local lexer = lexer -local token = lexer.token -local P = lpeg.P - -local xmlscriptlexer = { _NAME = "xml-script", _FILENAME = "scite-context-lexer-xml-script" } -local whitespace = lexer.WHITESPACE -- triggers states -local context = lexer.context - -local space = lexer.space -local nospace = 1 - space - (P("</") * P("script") + P("SCRIPT")) * P(">") - -local p_spaces = token(whitespace, space ^1) -local p_cdata = token("default", nospace^1) - -xmlscriptlexer._rules = { - { "whitespace", p_spaces }, - { "script", p_cdata }, -} - 
-xmlscriptlexer._tokenstyles = context.styleset - -return xmlscriptlexer diff --git a/context/data/scite/lexers/scite-context-lexer.lua b/context/data/scite/lexers/scite-context-lexer.lua deleted file mode 100644 index 5c7f40e7d..000000000 --- a/context/data/scite/lexers/scite-context-lexer.lua +++ /dev/null @@ -1,876 +0,0 @@ -local info = { - version = 1.324, - comment = "basics for scintilla lpeg lexer for context/metafun", - author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", - copyright = "PRAGMA ADE / ConTeXt Development Team", - license = "see context related readme files", - comment = "contains copyrighted code from mitchell.att.foicica.com", - -} - --- todo: move all code here --- todo: explore adapted dll ... properties + init --- todo: play with hotspot and other properties - --- wish: replace errorlist lexer (per language!) --- wish: access to all scite properties - --- The fold and lex functions are copied and patched from original code by Mitchell (see --- lexer.lua). All errors are mine. The ability to use lpeg is a real nice addition and a --- brilliant move. The code is a byproduct of the (mainly Lua based) textadept (still a --- rapidly moving target) that unfortunately misses a realtime output pane. On the other --- hand, SciTE is somewhat crippled by the fact that we cannot pop in our own (language --- dependent) lexer into the output pane (somehow the errorlist lexer is hard coded into --- the editor). Hopefully that will change some day. --- --- Starting with SciTE version 3.20 there is an issue with coloring. As we still lack --- a connection with scite itself (properties as well as printing to the log pane) we --- cannot trace this (on windows). As far as I can see, there are no fundamental --- changes in lexer.lua or LexLPeg.cxx so it must be in scintilla itself. So for the --- moment I stick to 3.10. Indicators are: no lexing of 'next' and 'goto <label>' in the --- Lua lexer and no brace highlighting either.
Interesting is that it does work ok in --- the cld lexer (so the Lua code is okay). Also the fact that char-def.lua lexes fast --- is a signal that the lexer quits somewhere halfway. --- --- After checking 3.24 and adapting to the new lexer tables things are okay again. So, --- this version assumes 3.24 or higher. In 3.24 we have a different token result, i.e. no --- longer a { tag, pattern } but just two return values. I didn't check other changes but --- will do that when I run into issues. I had optimized these small tables by hashing which --- was more efficient but this is no longer needed. --- --- In 3.3.1 another major change took place: some helper constants (maybe they're no --- longer constants) and functions were moved into the lexer modules namespace but the --- functions are assigned to the Lua module afterward so we cannot alias them beforehand. --- We're probably getting close to a stable interface now. --- --- I've considered making a whole copy and patch the other functions too as we need --- an extra nesting model. However, I don't want to maintain too much. An unfortunate --- change in 3.03 is that no longer a script can be specified. This means that instead --- of loading the extensions via the properties file, we now need to load them in our --- own lexers, unless of course we replace lexer.lua completely (which adds another --- installation issue). --- --- Another change has been that _LEXERHOME is no longer available. It looks like more and --- more functionality gets dropped so maybe at some point we need to ship our own dll/so --- files. For instance, I'd like to have access to the current filename and other scite --- properties. For instance, we could cache some info with each file, if only we had --- knowledge of what file we're dealing with. --- --- For huge files folding can be pretty slow and I do have some large ones that I keep --- open all the time. 
Loading is normally no issue, unless one has remembered the status --- and the cursor is at the last line of a 200K line file. Optimizing the fold function --- brought down loading of char-def.lua from 14 sec => 8 sec. Replacing the word_match --- function and optimizing the lex function gained another 2+ seconds. A 6 second load --- is quite ok for me. The changed lexer table structure (no subtables) brings loading --- down to a few seconds. --- --- When the lexer path is copied to the textadept lexer path, and the theme definition to --- theme path (as lexer.lua), the lexer works there as well. When I have time and motive --- I will make a proper setup file to tune the look and feel a bit and associate suffixes --- with the context lexer. The textadept editor has a nice style tracing option but lacks --- the tabs for selecting files that scite has. It also has no integrated run that pipes --- to the log pane (I wonder if it could borrow code from the console2 project). Interesting --- is that the jit version of textadept crashes on lexing large files (and does not feel --- faster either). --- --- Function load(lexer_name) starts with _M.WHITESPACE = lexer_name..'_whitespace' which --- means that we need to have it frozen at the moment we load another lexer. Because spacing --- is used to revert to a parent lexer we need to make sure that we load children as late --- as possible in order not to get the wrong whitespace trigger. This took me quite a while --- to figure out (not being that familiar with the internals). The lex and fold functions --- have been optimized. It is a pity that there is no proper print available. Another thing --- needed is a default style in our own theme style definition, as otherwise we get wrong --- nested lexers, especially if they are larger than a view. This is the hardest part of --- getting things right.
--- --- Eventually it might be safer to copy the other methods from lexer.lua here as well so --- that we have no dependencies, apart from the c library (for which at some point the api --- will be stable I hope). --- --- It's a pity that there is no scintillua library for the OSX version of scite. Even --- better would be to have the scintillua library as integral part of scite as that way I --- could use OSX alongside windows and linux (depending on needs). Also nice would be to --- have a proper interface to scite then because currently the lexer is rather isolated and the --- lua version does not provide all standard libraries. It would also be good to have lpeg --- support in the regular scite lua extension (currently you need to pick it up from someplace --- else). - -local lpeg = require 'lpeg' - -local R, P, S, C, V, Cp, Cs, Ct, Cmt, Cc, Cf, Cg, Carg = lpeg.R, lpeg.P, lpeg.S, lpeg.C, lpeg.V, lpeg.Cp, lpeg.Cs, lpeg.Ct, lpeg.Cmt, lpeg.Cc, lpeg.Cf, lpeg.Cg, lpeg.Carg -local lpegmatch = lpeg.match -local find, gmatch, match, lower, upper, gsub = string.find, string.gmatch, string.match, string.lower, string.upper, string.gsub -local concat = table.concat -local global = _G -local type, next, setmetatable, rawset = type, next, setmetatable, rawset - --- less confusing as we also use lexer for the current lexer and local _M = lexer is just ugly - -local lexers = lexer or { } -- + fallback for syntax check - --- ok, let's also move helpers here (todo: all go here) - -local sign = S("+-") -local digit = R("09") -local octdigit = R("07") -local hexdigit = R("09","AF","af") - -lexers.sign = sign -lexers.digit = digit -lexers.octdigit = octdigit -lexers.hexdigit = hexdigit -lexers.xdigit = hexdigit - -lexers.dec_num = digit^1 -lexers.oct_num = P("0") - * octdigit^1 -lexers.hex_num = P("0") * S("xX") - * (hexdigit^0 * '.' * hexdigit^1 + hexdigit^1 * '.' * hexdigit^0 + hexdigit^1) - * (S("pP") * sign^-1 * hexdigit^1)^-1 -lexers.float = sign^-1 - * (digit^0 * '.'
* digit^1 + digit^1 * '.' * digit^0 + digit^1) - * S("eE") * sign^-1 * digit^1 - -lexers.dec_int = sign^-1 * lexers.dec_num -lexers.oct_int = sign^-1 * lexers.oct_num -lexers.hex_int = sign^-1 * lexers.hex_num - --- these helpers are set afterwards so we delay their initialization ... there is no need to alias --- each time again and this way we can more easily adapt to updates - -local get_style_at, get_indent_amount, get_property, get_fold_level, FOLD_BASE, FOLD_HEADER, FOLD_BLANK, initialize - -initialize = function() - FOLD_BASE = lexers.FOLD_BASE or SC_FOLDLEVELBASE - FOLD_HEADER = lexers.FOLD_HEADER or SC_FOLDLEVELHEADERFLAG - FOLD_BLANK = lexers.FOLD_BLANK or SC_FOLDLEVELWHITEFLAG - get_style_at = lexers.get_style_at or GetStyleAt - get_indent_amount = lexers.get_indent_amount or GetIndentAmount - get_property = lexers.get_property or GetProperty - get_fold_level = lexers.get_fold_level or GetFoldLevel - -- - initialize = nil -end - --- we create our own extra namespace for extensions and helpers - -lexers.context = lexers.context or { } -local context = lexers.context - -context.patterns = context.patterns or { } -local patterns = context.patterns - -lexers._CONTEXTEXTENSIONS = true - -local locations = { - -- lexers.context.path, - "data", -- optional data directory - "..", -- regular scite directory -} - -local function collect(name) --- local definitions = loadfile(name .. ".luc") or loadfile(name .. ".lua") - local okay, definitions = pcall(function () return require(name) end) - if okay then - if type(definitions) == "function" then - definitions = definitions() - end - if type(definitions) == "table" then - return definitions - end - end -end - -function context.loaddefinitions(name) - for i=1,#locations do - local data = collect(locations[i] .. "/" .. 
name) - if data then - return data - end - end -end - -function context.word_match(words,word_chars,case_insensitive) - local chars = '%w_' -- maybe just "" when word_chars - if word_chars then - chars = '^([' .. chars .. gsub(word_chars,'([%^%]%-])', '%%%1') ..']+)' - else - chars = '^([' .. chars ..']+)' - end - if case_insensitive then - local word_list = { } - for i=1,#words do - word_list[lower(words[i])] = true - end - return P(function(input, index) - local s, e, word = find(input,chars,index) - return word and word_list[lower(word)] and e + 1 or nil - end) - else - local word_list = { } - for i=1,#words do - word_list[words[i]] = true - end - return P(function(input, index) - local s, e, word = find(input,chars,index) - return word and word_list[word] and e + 1 or nil - end) - end -end - -local idtoken = R("az","AZ","\127\255","__") -local digit = R("09") -local sign = S("+-") -local period = P(".") -local space = S(" \n\r\t\f\v") - -patterns.idtoken = idtoken - -patterns.digit = digit -patterns.sign = sign -patterns.period = period - -patterns.cardinal = digit^1 -patterns.integer = sign^-1 * digit^1 - -patterns.real = - sign^-1 * ( -- at most one - digit^1 * period * digit^0 -- 10.0 10. 
- + digit^0 * period * digit^1 -- 0.10 .10 - + digit^1 -- 10 - ) - -patterns.restofline = (1-S("\n\r"))^1 -patterns.space = space -patterns.spacing = space^1 -patterns.nospacing = (1-space)^1 -patterns.anything = P(1) - -local endof = S("\n\r\f") - -patterns.startofline = P(function(input,index) - return (index == 1 or lpegmatch(endof,input,index-1)) and index -end) - -function context.exact_match(words,word_chars,case_insensitive) - local characters = concat(words) - local pattern -- the concat catches _ etc - if word_chars == true or word_chars == false or word_chars == nil then - word_chars = "" - end - if type(word_chars) == "string" then - pattern = S(characters) + idtoken - if case_insensitive then - pattern = pattern + S(upper(characters)) + S(lower(characters)) - end - if word_chars ~= "" then - pattern = pattern + S(word_chars) - end - elseif word_chars then - pattern = word_chars - end - if case_insensitive then - local list = { } - for i=1,#words do - list[lower(words[i])] = true - end - return Cmt(pattern^1, function(_,i,s) - return list[lower(s)] -- and i or nil - end) - else - local list = { } - for i=1,#words do - list[words[i]] = true - end - return Cmt(pattern^1, function(_,i,s) - return list[s] -- and i or nil - end) - end -end - --- spell checking (we can only load lua files) --- --- return { --- min = 3, --- max = 40, --- n = 12345, --- words = { --- ["someword"] = "someword", --- ["anotherword"] = "Anotherword", --- }, --- } - -local lists = { } - -function context.setwordlist(tag,limit) -- returns hash (lowercase keys and original values) - if not tag or tag == "" then - return false, 3 - end - local list = lists[tag] - if not list then - list = context.loaddefinitions("spell-" .. 
tag) - if not list or type(list) ~= "table" then - list = { words = false, min = 3 } - else - list.words = list.words or false - list.min = list.min or 3 - end - lists[tag] = list - end - return list.words, list.min -end - -patterns.wordtoken = R("az","AZ","\127\255") -patterns.wordpattern = patterns.wordtoken^3 -- todo: if limit and #s < limit then - -function context.checkedword(validwords,validminimum,s,i) -- ,limit - if not validwords then -- or #s < validminimum then - return true, "text", i -- true, "default", i - else - -- keys are lower - local word = validwords[s] - if word == s then - return true, "okay", i -- exact match - elseif word then - return true, "warning", i -- case issue - else - local word = validwords[lower(s)] - if word == s then - return true, "okay", i -- exact match - elseif word then - return true, "warning", i -- case issue - elseif upper(s) == s then - return true, "warning", i -- probably a logo or acronym - else - return true, "error", i - end - end - end -end - -function context.styleofword(validwords,validminimum,s) -- ,limit - if not validwords or #s < validminimum then - return "text" - else - -- keys are lower - local word = validwords[s] - if word == s then - return "okay" -- exact match - elseif word then - return "warning" -- case issue - else - local word = validwords[lower(s)] - if word == s then - return "okay" -- exact match - elseif word then - return "warning" -- case issue - elseif upper(s) == s then - return "warning" -- probably a logo or acronym - else - return "error" - end - end - end -end - --- overloaded functions - -local h_table, b_table, n_table = { }, { }, { } -- from the time small tables were used (optimization) - -setmetatable(h_table, { __index = function(t,level) local v = { level, FOLD_HEADER } t[level] = v return v end }) -setmetatable(b_table, { __index = function(t,level) local v = { level, FOLD_BLANK } t[level] = v return v end }) -setmetatable(n_table, { __index = function(t,level) local v = { 
level } t[level] = v return v end }) - -local newline = P("\r\n") + S("\r\n") -local p_yes = Cp() * Cs((1-newline)^1) * newline^-1 -local p_nop = newline - -local folders = { } - -local function fold_by_parsing(text,start_pos,start_line,start_level,lexer) - local folder = folders[lexer] - if not folder then - -- - local pattern, folds, text, start_pos, line_num, prev_level, current_level - -- - local fold_symbols = lexer._foldsymbols - local fold_pattern = lexer._foldpattern -- use lpeg instead (context extension) - -- - if fold_pattern then - -- if no functions are found then we could have a faster one - fold_pattern = Cp() * C(fold_pattern) / function(s,match) - local symbols = fold_symbols[get_style_at(start_pos + s)] - if symbols then - local l = symbols[match] - if l then - current_level = current_level + l - end - end - end - local action_y = function() - folds[line_num] = prev_level - if current_level > prev_level then - folds[line_num] = prev_level + FOLD_HEADER - end - if current_level < FOLD_BASE then - current_level = FOLD_BASE - end - prev_level = current_level - line_num = line_num + 1 - end - local action_n = function() - folds[line_num] = prev_level + FOLD_BLANK - line_num = line_num + 1 - end - pattern = ((fold_pattern + (1-newline))^1 * newline / action_y + newline/action_n)^0 - - else - -- the traditional one but a bit optimized - local fold_symbols_patterns = fold_symbols._patterns - local action_y = function(pos,line) - for j = 1, #fold_symbols_patterns do - for s, match in gmatch(line,fold_symbols_patterns[j]) do -- '()('..patterns[i]..')' - local symbols = fold_symbols[get_style_at(start_pos + pos + s - 1)] - local l = symbols and symbols[match] - local t = type(l) - if t == 'number' then - current_level = current_level + l - elseif t == 'function' then - current_level = current_level + l(text, pos, line, s, match) - end - end - end - folds[line_num] = prev_level - if current_level > prev_level then - folds[line_num] = prev_level + FOLD_HEADER 
- end - if current_level < FOLD_BASE then - current_level = FOLD_BASE - end - prev_level = current_level - line_num = line_num + 1 - end - local action_n = function() - folds[line_num] = prev_level + FOLD_BLANK - line_num = line_num + 1 - end - pattern = (p_yes/action_y + p_nop/action_n)^0 - end - -- - local reset_parser = lexer._reset_parser - -- - folder = function(_text_,_start_pos_,_start_line_,_start_level_) - if reset_parser then - reset_parser() - end - folds = { } - text = _text_ - start_pos = _start_pos_ - line_num = _start_line_ - prev_level = _start_level_ - current_level = prev_level - lpegmatch(pattern,text) - -- make folds collectable - local t = folds - folds = nil - return t - end - folders[lexer] = folder - end - return folder(text,start_pos,start_line,start_level,lexer) -end - -local folds, current_line, prev_level - -local function action_y() - local current_level = FOLD_BASE + get_indent_amount(current_line) - if current_level > prev_level then -- next level - local i = current_line - 1 - local f - while true do - f = folds[i] - if not f then - break - elseif f[2] == FOLD_BLANK then - i = i - 1 - else - f[2] = FOLD_HEADER -- low indent - break - end - end - folds[current_line] = { current_level } -- high indent - elseif current_level < prev_level then -- prev level - local f = folds[current_line - 1] - if f then - f[1] = prev_level -- high indent - end - folds[current_line] = { current_level } -- low indent - else -- same level - folds[current_line] = { prev_level } - end - prev_level = current_level - current_line = current_line + 1 -end - -local function action_n() - folds[current_line] = { prev_level, FOLD_BLANK } - current_line = current_line + 1 -end - -local pattern = ( S("\t ")^0 * ( (1-S("\n\r"))^1 / action_y + P(true) / action_n) * newline )^0 - -local function fold_by_indentation(text,start_pos,start_line,start_level) - -- initialize - folds = { } - current_line = start_line - prev_level = start_level - -- define - -- -- not here .. 
pattern binds and local functions are not frozen - -- analyze - lpegmatch(pattern,text) - -- flatten - for line, level in next, folds do - folds[line] = level[1] + (level[2] or 0) - end - -- done, make folds collectable - local t = folds - folds = nil - return t -end - -local function fold_by_line(text,start_pos,start_line,start_level) - local folds = { } - -- can also be lpeg'd - for _ in gmatch(text,".-\r?\n") do - folds[start_line] = n_table[start_level] -- { start_level } -- style tables ? needs checking - start_line = start_line + 1 - end - return folds -end - -local threshold_by_lexer = 512 * 1024 -- we don't know the filesize yet -local threshold_by_parsing = 512 * 1024 -- we don't know the filesize yet -local threshold_by_indentation = 512 * 1024 -- we don't know the filesize yet -local threshold_by_line = 512 * 1024 -- we don't know the filesize yet - -function context.fold(text,start_pos,start_line,start_level) -- hm, we had size thresholds .. where did they go - if text == '' then - return { } - end - if initialize then - initialize() - end - local lexer = global._LEXER - local fold_by_lexer = lexer._fold - local fold_by_symbols = lexer._foldsymbols - local filesize = 0 -- we don't know that - if fold_by_lexer then - if filesize <= threshold_by_lexer then - return fold_by_lexer(text,start_pos,start_line,start_level,lexer) - end - elseif fold_by_symbols then -- and get_property('fold.by.parsing',1) > 0 then - if filesize <= threshold_by_parsing then - return fold_by_parsing(text,start_pos,start_line,start_level,lexer) - end - elseif get_property('fold.by.indentation',1) > 0 then - if filesize <= threshold_by_indentation then - return fold_by_indentation(text,start_pos,start_line,start_level,lexer) - end - elseif get_property('fold.by.line',1) > 0 then - if filesize <= threshold_by_line then - return fold_by_line(text,start_pos,start_line,start_level,lexer) - end - end - return { } -end - --- The following code is mostly unchanged: - -local function 
add_rule(lexer,id,rule) - if not lexer._RULES then - lexer._RULES = { } - lexer._RULEORDER = { } - end - lexer._RULES[id] = rule - lexer._RULEORDER[#lexer._RULEORDER + 1] = id -end - -local function add_style(lexer,token_name,style) - local len = lexer._STYLES.len - if len == 32 then - len = len + 8 - end - if len >= 128 then - print('Too many styles defined (128 MAX)') - end - lexer._TOKENS[token_name] = len - lexer._STYLES[len] = style - lexer._STYLES.len = len + 1 -end - -local function join_tokens(lexer) - local patterns = lexer._RULES - local order = lexer._RULEORDER - local token_rule = patterns[order[1]] - for i=2,#order do - token_rule = token_rule + patterns[order[i]] - end - lexer._TOKENRULE = token_rule - return token_rule -end - -local function add_lexer(grammar, lexer, token_rule) - local token_rule = join_tokens(lexer) - local lexer_name = lexer._NAME - local children = lexer._CHILDREN - for i=1,#children do - local child = children[i] - if child._CHILDREN then - add_lexer(grammar, child) - end - local child_name = child._NAME - local rules = child._EMBEDDEDRULES[lexer_name] - local rules_token_rule = grammar['__'..child_name] or rules.token_rule - grammar[child_name] = (-rules.end_rule * rules_token_rule)^0 * rules.end_rule^-1 * V(lexer_name) - local embedded_child = '_' .. child_name - grammar[embedded_child] = rules.start_rule * (-rules.end_rule * rules_token_rule)^0 * rules.end_rule^-1 - token_rule = V(embedded_child) + token_rule - end - grammar['__' .. lexer_name] = token_rule - grammar[lexer_name] = token_rule^0 -end - -local function build_grammar(lexer, initial_rule) - local children = lexer._CHILDREN - if children then - local lexer_name = lexer._NAME - if not initial_rule then - initial_rule = lexer_name - end - local grammar = { initial_rule } - add_lexer(grammar, lexer) - lexer._INITIALRULE = initial_rule - lexer._GRAMMAR = Ct(P(grammar)) - else - lexer._GRAMMAR = Ct(join_tokens(lexer)^0) - end -end - --- so far. 
We need these local functions in the next one. - -local lineparsers = { } - -function context.lex(text,init_style) - local lexer = global._LEXER - local grammar = lexer._GRAMMAR - if initialize then - initialize() - end - if not grammar then - return { } - elseif lexer._LEXBYLINE then -- we could keep token - local tokens = { } - local offset = 0 - local noftokens = 0 - local lineparser = lineparsers[lexer] - if not lineparser then -- probably a cmt is more efficient - lineparser = C((1-newline)^0 * newline) / function(line) - local length = #line - local line_tokens = length > 0 and lpegmatch(grammar,line) - if line_tokens then - for i=1,#line_tokens,2 do - noftokens = noftokens + 1 - tokens[noftokens] = line_tokens[i] - noftokens = noftokens + 1 - tokens[noftokens] = line_tokens[i + 1] + offset - end - end - offset = offset + length - if noftokens > 0 and tokens[noftokens] ~= offset then - noftokens = noftokens + 1 - tokens[noftokens] = 'default' - noftokens = noftokens + 1 - tokens[noftokens] = offset + 1 - end - end - lineparser = lineparser^0 - lineparsers[lexer] = lineparser - end - lpegmatch(lineparser,text) - return tokens - - elseif lexer._CHILDREN then - -- as we cannot print, tracing is not possible ... 
this might change as we can as well - -- generate them all in one go (sharing as much as possible) - local hash = lexer._HASH -- hm, was _hash - if not hash then - hash = { } - lexer._HASH = hash - end - grammar = hash[init_style] - if grammar then - lexer._GRAMMAR = grammar - else - for style, style_num in next, lexer._TOKENS do - if style_num == init_style then - -- the name of the lexers is filtered from the whitespace - -- specification - local lexer_name = match(style,'^(.+)_whitespace') or lexer._NAME - if lexer._INITIALRULE ~= lexer_name then - grammar = hash[lexer_name] - if not grammar then - build_grammar(lexer,lexer_name) - grammar = lexer._GRAMMAR - hash[lexer_name] = grammar - end - end - break - end - end - grammar = grammar or lexer._GRAMMAR - hash[init_style] = grammar - end - return lpegmatch(grammar,text) - else - return lpegmatch(grammar,text) - end -end - --- todo: keywords: one lookup and multiple matches - --- function context.token(name, patt) --- return Ct(patt * Cc(name) * Cp()) --- end --- --- -- hm, changed in 3.24 .. no longer a table - -function context.token(name, patt) - return patt * Cc(name) * Cp() -end - -lexers.fold = context.fold -lexers.lex = context.lex -lexers.token = context.token -lexers.exact_match = context.exact_match - --- helper .. alas ... the lexer's lua instance is rather crippled .. 
not even --- math is part of it - -local floor = math and math.floor -local char = string.char - -if not floor then - - floor = function(n) - return tonumber(string.format("%d",n)) - end - - math = math or { } - - math.floor = floor - -end - -local function utfchar(n) - if n < 0x80 then - return char(n) - elseif n < 0x800 then - return char( - 0xC0 + floor(n/0x40), - 0x80 + (n % 0x40) - ) - elseif n < 0x10000 then - return char( - 0xE0 + floor(n/0x1000), - 0x80 + (floor(n/0x40) % 0x40), - 0x80 + (n % 0x40) - ) - elseif n < 0x40000 then - return char( - 0xF0 + floor(n/0x40000), - 0x80 + floor(n/0x1000), - 0x80 + (floor(n/0x40) % 0x40), - 0x80 + (n % 0x40) - ) - else - -- return char( - -- 0xF1 + floor(n/0x1000000), - -- 0x80 + floor(n/0x40000), - -- 0x80 + floor(n/0x1000), - -- 0x80 + (floor(n/0x40) % 0x40), - -- 0x80 + (n % 0x40) - -- ) - return "?" - end -end - -context.utfchar = utfchar - --- a helper from l-lpeg: - -local gmatch = string.gmatch - -local function make(t) - local p - for k, v in next, t do - if not p then - if next(v) then - p = P(k) * make(v) - else - p = P(k) - end - else - if next(v) then - p = p + P(k) * make(v) - else - p = p + P(k) - end - end - end - return p -end - -function lpeg.utfchartabletopattern(list) - local tree = { } - for i=1,#list do - local t = tree - for c in gmatch(list[i],".") do - if not t[c] then - t[c] = { } - end - t = t[c] - end - end - return make(tree) -end - -patterns.invisibles = lpeg.utfchartabletopattern { - utfchar(0x00A0), -- nbsp - utfchar(0x2000), -- enquad - utfchar(0x2001), -- emquad - utfchar(0x2002), -- enspace - utfchar(0x2003), -- emspace - utfchar(0x2004), -- threeperemspace - utfchar(0x2005), -- fourperemspace - utfchar(0x2006), -- sixperemspace - utfchar(0x2007), -- figurespace - utfchar(0x2008), -- punctuationspace - utfchar(0x2009), -- breakablethinspace - utfchar(0x200A), -- hairspace - utfchar(0x200B), -- zerowidthspace - utfchar(0x202F), -- narrownobreakspace - utfchar(0x205F), -- math thinspace 
-} - --- now we can make: - -patterns.iwordtoken = patterns.wordtoken - patterns.invisibles -patterns.iwordpattern = patterns.iwordtoken^3 - --- require("themes/scite-context-theme") - --- In order to deal with some bug in additional styles (I have no clue what is --- wrong, but additional styles get ignored and clash somehow) I just copy the --- original lexer code ... see original for comments. - -return lexers diff --git a/context/data/scite/lexers/themes/scite-context-theme-keep.lua b/context/data/scite/lexers/themes/scite-context-theme-keep.lua deleted file mode 100644 index 7f9423d9a..000000000 --- a/context/data/scite/lexers/themes/scite-context-theme-keep.lua +++ /dev/null @@ -1,233 +0,0 @@ -local info = { - version = 1.002, - comment = "theme for scintilla lpeg lexer for context/metafun", - author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", - copyright = "PRAGMA ADE / ConTeXt Development Team", - license = "see context related readme files", -} - --- context_path = string.split(os.resultof("mtxrun --find-file context.mkiv"))[1] or "" --- global.trace("OEPS") -- how do we get access to the regular lua extensions - --- The regular styles set the main lexer styles table but we avoid that in order not --- to end up with updating issues. We just use another table. - --- if not lexer._CONTEXTEXTENSIONS then require("scite-context-lexer") end - -local context_path = "t:/sources" -- c:/data/tex-context/tex/texmf-context/tex/base -local font_name = 'Dejavu Sans Mono' -local font_size = 14 - -if not WIN32 then - font_name = '!' .. 
font_name -end - -local color = lexer.color -local style = lexer.style - -lexer.context = lexer.context or { } -local context = lexer.context - -context.path = context_path - -colors = { - red = color('7F', '00', '00'), - green = color('00', '7F', '00'), - blue = color('00', '00', '7F'), - cyan = color('00', '7F', '7F'), - magenta = color('7F', '00', '7F'), - yellow = color('7F', '7F', '00'), - orange = color('B0', '7F', '00'), - -- - white = color('FF', 'FF', 'FF'), - light = color('CF', 'CF', 'CF'), - grey = color('80', '80', '80'), - dark = color('4F', '4F', '4F'), - black = color('00', '00', '00'), - -- - selection = color('F7', 'F7', 'F7'), - logpanel = color('E7', 'E7', 'E7'), - textpanel = color('CF', 'CF', 'CF'), - linepanel = color('A7', 'A7', 'A7'), - tippanel = color('44', '44', '44'), - -- - right = color('00', '00', 'FF'), - wrong = color('FF', '00', '00'), -} - -colors.teal = colors.cyan -colors.purple = colors.magenta - --- to be set: --- --- style_nothing --- style_class --- style_comment --- style_constant --- style_definition --- style_error --- style_function --- style_keyword --- style_number --- style_operator --- style_string --- style_preproc --- style_tag --- style_type --- style_variable --- style_embedded --- style_label --- style_regex --- style_identifier --- --- style_line_number --- style_bracelight --- style_bracebad --- style_controlchar --- style_indentguide --- style_calltip - -style_default = style { - font = font_name, - size = font_size, - fore = colors.black, - back = colors.textpanel, -} - -style_nothing = style { - -- empty -} - -style_number = style { fore = colors.cyan } -style_comment = style { fore = colors.yellow } -style_string = style { fore = colors.magenta } -style_keyword = style { fore = colors.blue, bold = true } - -style_quote = style { fore = colors.blue, bold = true } -style_special = style { fore = colors.blue } -style_extra = style { fore = colors.yellow } - -style_embedded = style { fore = colors.black, bold 
= true } - -style_char = style { fore = colors.magenta } -style_reserved = style { fore = colors.magenta, bold = true } -style_class = style { fore = colors.black, bold = true } -style_constant = style { fore = colors.cyan, bold = true } -style_definition = style { fore = colors.black, bold = true } -style_okay = style { fore = colors.dark } -style_error = style { fore = colors.red } -style_warning = style { fore = colors.orange } -style_invisible = style { back = colors.orange } -style_function = style { fore = colors.black, bold = true } -style_operator = style { fore = colors.blue } -style_preproc = style { fore = colors.yellow, bold = true } -style_tag = style { fore = colors.cyan } -style_type = style { fore = colors.blue } -style_variable = style { fore = colors.black } -style_identifier = style_nothing - -style_standout = style { fore = colors.orange, bold = true } - -style_line_number = style { back = colors.linepanel } -style_bracelight = style_standout -style_bracebad = style_standout -style_indentguide = style { fore = colors.linepanel, back = colors.white } -style_calltip = style { fore = colors.white, back = colors.tippanel } -style_controlchar = style_nothing - -style_label = style { fore = colors.red, bold = true } -- style { fore = colors.cyan, bold = true } -style_regex = style_string - -style_command = style { fore = colors.green, bold = true } - --- only bold seems to work - -lexer.style_nothing = style_nothing -lexer.style_class = style_class -lexer.style_comment = style_comment -lexer.style_constant = style_constant -lexer.style_definition = style_definition -lexer.style_error = style_error -lexer.style_function = style_function -lexer.style_keyword = style_keyword -lexer.style_number = style_number -lexer.style_operator = style_operator -lexer.style_string = style_string -lexer.style_preproc = style_preproc -lexer.style_tag = style_tag -lexer.style_type = style_type -lexer.style_variable = style_variable -lexer.style_embedded = style_embedded 
-lexer.style_label = style_label -lexer.style_regex = style_regex -lexer.style_identifier = style_nothing - -local styles = { -- as we have globals we could do with less - - -- ["whitespace"] = style_whitespace, -- not to be set! - - ["default"] = style_nothing, - ["number"] = style_number, - ["comment"] = style_comment, - ["keyword"] = style_keyword, - ["string"] = style_string, - ["preproc"] = style_preproc, - - ["reserved"] = style_reserved, - ["internal"] = style_standout, - - ["command"] = style_command, - ["preamble"] = style_comment, - ["embedded"] = style_embedded, - ["grouping"] = style { fore = colors.red }, - ["label"] = style_label, - ["primitive"] = style_keyword, - ["plain"] = style { fore = colors.dark, bold = true }, - ["user"] = style { fore = colors.green }, - ["data"] = style_constant, - ["special"] = style_special, - ["extra"] = style_extra, - ["quote"] = style_quote, - - ["okay"] = style_okay, - ["warning"] = style_warning, - ["invisible"] = style_invisible, - ["error"] = style_error, - -} - --- Old method (still available): - -local styleset = { } - -for k, v in next, styles do - styleset[#styleset+1] = { k, v } -end - -context.styles = styles -context.styleset = styleset - --- We need to be sparse due to some limitation (and the number of built in styles --- growing). 
- --- function context.newstyleset(list) --- local t = { } --- if list then --- for i=1,#list do --- t[list[i]] = true --- end --- end --- return t --- end - --- function context.usestyle(set,name) --- set[name] = true --- return name --- end - --- function context.usestyleset(set) --- local t = { } --- for k, _ in next, set do --- t[#t+1] = { k, styles[k] or styles.default } --- end --- end diff --git a/context/data/scite/lexers/themes/scite-context-theme.lua b/context/data/scite/lexers/themes/scite-context-theme.lua deleted file mode 100644 index 6e161b22f..000000000 --- a/context/data/scite/lexers/themes/scite-context-theme.lua +++ /dev/null @@ -1,226 +0,0 @@ -local info = { - version = 1.002, - comment = "theme for scintilla lpeg lexer for context/metafun", - author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", - copyright = "PRAGMA ADE / ConTeXt Development Team", - license = "see context related readme files", -} - --- context_path = string.split(os.resultof("mtxrun --find-file context.mkiv"))[1] or "" --- global.trace("OEPS") -- how do we get access to the regular lua extensions - --- The regular styles set the main lexer styles table but we avoid that in order not --- to end up with updating issues. We just use another table. - -if not lexer._CONTEXTEXTENSIONS then require("scite-context-lexer") end - -local context_path = "t:/sources" -- c:/data/tex-context/tex/texmf-context/tex/base -local font_name = 'Dejavu Sans Mono' -local font_size = 14 - -if not WIN32 then - font_name = '!' .. 
font_name -end - -local color = lexer.color -local style = lexer.style - -lexer.context = lexer.context or { } -local context = lexer.context - -context.path = context_path - -local colors = { - red = color('7F', '00', '00'), - green = color('00', '7F', '00'), - blue = color('00', '00', '7F'), - cyan = color('00', '7F', '7F'), - magenta = color('7F', '00', '7F'), - yellow = color('7F', '7F', '00'), - orange = color('B0', '7F', '00'), - -- - white = color('FF', 'FF', 'FF'), - light = color('CF', 'CF', 'CF'), - grey = color('80', '80', '80'), - dark = color('4F', '4F', '4F'), - black = color('00', '00', '00'), - -- - selection = color('F7', 'F7', 'F7'), - logpanel = color('E7', 'E7', 'E7'), - textpanel = color('CF', 'CF', 'CF'), - linepanel = color('A7', 'A7', 'A7'), - tippanel = color('44', '44', '44'), - -- - right = color('00', '00', 'FF'), - wrong = color('FF', '00', '00'), -} - -colors.teal = colors.cyan -colors.purple = colors.magenta - -lexer.colors = colors - --- defaults: - -local style_nothing = style { } ------ style_whitespace = style { } -local style_comment = style { fore = colors.yellow } -local style_string = style { fore = colors.magenta } -local style_number = style { fore = colors.cyan } -local style_keyword = style { fore = colors.blue, bold = true } -local style_identifier = style_nothing -local style_operator = style { fore = colors.blue } -local style_error = style { fore = colors.red } -local style_preproc = style { fore = colors.yellow, bold = true } -local style_constant = style { fore = colors.cyan, bold = true } -local style_variable = style { fore = colors.black } -local style_function = style { fore = colors.black, bold = true } -local style_class = style { fore = colors.black, bold = true } -local style_type = style { fore = colors.blue } -local style_label = style { fore = colors.red, bold = true } -local style_regex = style { fore = colors.magenta } - --- reserved: - -local style_default = style { font = font_name, size = font_size, 
fore = colors.black, back = colors.textpanel } -local style_text = style { font = font_name, size = font_size, fore = colors.black, back = colors.textpanel } -local style_line_number = style { back = colors.linepanel } -local style_bracelight = style { fore = colors.orange, bold = true } -local style_bracebad = style { fore = colors.orange, bold = true } -local style_indentguide = style { fore = colors.linepanel, back = colors.white } -local style_calltip = style { fore = colors.white, back = colors.tippanel } -local style_controlchar = style_nothing - --- extras: - -local style_quote = style { fore = colors.blue, bold = true } -local style_special = style { fore = colors.blue } -local style_extra = style { fore = colors.yellow } -local style_embedded = style { fore = colors.black, bold = true } ------ style_char = style { fore = colors.magenta } -local style_reserved = style { fore = colors.magenta, bold = true } -local style_definition = style { fore = colors.black, bold = true } -local style_okay = style { fore = colors.dark } -local style_warning = style { fore = colors.orange } -local style_invisible = style { back = colors.orange } -local style_tag = style { fore = colors.cyan } ------ style_standout = style { fore = colors.orange, bold = true } -local style_command = style { fore = colors.green, bold = true } -local style_internal = style { fore = colors.orange, bold = true } - -local style_preamble = style { fore = colors.yellow } -local style_grouping = style { fore = colors.red } -local style_primitive = style { fore = colors.blue, bold = true } -local style_plain = style { fore = colors.dark, bold = true } -local style_user = style { fore = colors.green } -local style_data = style { fore = colors.cyan, bold = true } - - --- used by the generic lexer: - -lexer.style_nothing = style_nothing -- 0 ------.whitespace = style_whitespace -- 1 -lexer.style_comment = style_comment -- 2 -lexer.style_string = style_string -- 3 -lexer.style_number = style_number -- 4 
-lexer.style_keyword = style_keyword -- 5 -lexer.style_identifier = style_nothing -- 6 -lexer.style_operator = style_operator -- 7 -lexer.style_error = style_error -- 8 -lexer.style_preproc = style_preproc -- 9 -lexer.style_constant = style_constant -- 10 -lexer.style_variable = style_variable -- 11 -lexer.style_function = style_function -- 12 -lexer.style_class = style_class -- 13 -lexer.style_type = style_type -- 14 -lexer.style_label = style_label -- 15 -lexer.style_regex = style_regex -- 16 (was style_regexp, an undefined name; the local above is style_regex) - -lexer.style_default = style_default -- 32 -lexer.style_line_number = style_line_number -- 33 -lexer.style_bracelight = style_bracelight -- 34 -lexer.style_bracebad = style_bracebad -- 35 -lexer.style_indentguide = style_indentguide -- 36 -lexer.style_calltip = style_calltip -- 37 -lexer.style_controlchar = style_controlchar -- 38 - -local styles = { -- as we have globals we could do with less - - -- ["whitespace"] = style_whitespace, -- not to be set! - ["default"] = style_nothing, -- else no good backtracking to start-of-child - -- ["number"] = style_number, - -- ["comment"] = style_comment, - -- ["keyword"] = style_keyword, - -- ["string"] = style_string, - -- ["preproc"] = style_preproc, - -- ["error"] = style_error, - -- ["label"] = style_label, - - ["invisible"] = style_invisible, - ["quote"] = style_quote, - ["special"] = style_special, - ["extra"] = style_extra, - ["embedded"] = style_embedded, - -- ["char"] = style_char, - ["reserved"] = style_reserved, - -- ["definition"] = style_definition, - ["okay"] = style_okay, - ["warning"] = style_warning, - -- ["standout"] = style_standout, - ["command"] = style_command, - ["internal"] = style_internal, - ["preamble"] = style_preamble, - ["grouping"] = style_grouping, - ["primitive"] = style_primitive, - ["plain"] = style_plain, - ["user"] = style_user, - ["data"] = style_data, - - ["text"] = style_text, -- style_default - -} - -local styleset = { } - -for k, v in next, styles do - styleset[#styleset+1] = { k, v } -end - 
-context.styles = styles -context.styleset = styleset - -function context.stylesetcopy() - local t = { } - for i=1,#styleset do - local s = styleset[i] - t[i] = s -t[s[1]] = t[s[2]] -- new style ? - end - t[#t+1] = { "whitespace", style_nothing } -t.whitespace = style_nothing -- new style ? - return t -end - --- We can be sparse if needed: - --- function context.newstyleset(list) --- local t = { } --- if list then --- for i=1,#list do --- t[list[i]] = true --- end --- end --- return t --- end - --- function context.usestyle(set,name) --- set[name] = true --- return name --- end - --- function context.usestyleset(set) --- local t = { } --- for k, _ in next, set do --- t[#t+1] = { k, styles[k] or styles.default } --- end --- end diff --git a/context/data/scite/metapost.properties b/context/data/scite/metapost.properties deleted file mode 100644 index fe89b65eb..000000000 --- a/context/data/scite/metapost.properties +++ /dev/null @@ -1 +0,0 @@ -import scite-metapost diff --git a/context/data/scite/scite-context-data-context.properties b/context/data/scite/scite-context-data-context.properties deleted file mode 100644 index fbd958f8a..000000000 --- a/context/data/scite/scite-context-data-context.properties +++ /dev/null @@ -1,191 +0,0 @@ -keywordclass.context.constants=\ -zerocount minusone minustwo plusone \ -plustwo plusthree plusfour plusfive plussix \ -plusseven pluseight plusnine plusten plussixteen \ -plushundred plusthousand plustenthousand plustwentythousand medcard \ -maxcard zeropoint onepoint halfapoint onebasepoint \ -maxdimen scaledpoint thousandpoint points halfpoint \ -zeroskip zeromuskip onemuskip pluscxxvii pluscxxviii \ -pluscclv pluscclvi normalpagebox endoflinetoken outputnewlinechar \ -emptytoks empty undefined voidbox emptybox \ -emptyvbox emptyhbox bigskipamount medskipamount smallskipamount \ -fmtname fmtversion texengine texenginename texengineversion \ -luatexengine pdftexengine xetexengine unknownengine etexversion \ -pdftexversion 
xetexversion xetexrevision activecatcode bgroup \ -egroup endline conditionaltrue conditionalfalse attributeunsetvalue \ -uprotationangle rightrotationangle downrotationangle leftrotationangle inicatcodes \ -ctxcatcodes texcatcodes notcatcodes txtcatcodes vrbcatcodes \ -prtcatcodes nilcatcodes luacatcodes tpacatcodes tpbcatcodes \ -xmlcatcodes escapecatcode begingroupcatcode endgroupcatcode mathshiftcatcode \ -alignmentcatcode endoflinecatcode parametercatcode superscriptcatcode subscriptcatcode \ -ignorecatcode spacecatcode lettercatcode othercatcode activecatcode \ -commentcatcode invalidcatcode tabasciicode newlineasciicode formfeedasciicode \ -endoflineasciicode endoffileasciicode spaceasciicode hashasciicode dollarasciicode \ -commentasciicode ampersandasciicode colonasciicode backslashasciicode circumflexasciicode \ -underscoreasciicode leftbraceasciicode barasciicode rightbraceasciicode tildeasciicode \ -delasciicode lessthanasciicode morethanasciicode doublecommentsignal atsignasciicode \ -exclamationmarkasciicode questionmarkasciicode doublequoteasciicode singlequoteasciicode forwardslashasciicode \ -primeasciicode activemathcharcode activetabtoken activeformfeedtoken activeendoflinetoken \ -batchmodecode nonstopmodecode scrollmodecode errorstopmodecode bottomlevelgroupcode \ -simplegroupcode hboxgroupcode adjustedhboxgroupcode vboxgroupcode vtopgroupcode \ -aligngroupcode noaligngroupcode outputgroupcode mathgroupcode discretionarygroupcode \ -insertgroupcode vcentergroupcode mathchoicegroupcode semisimplegroupcode mathshiftgroupcode \ -mathleftgroupcode vadjustgroupcode charnodecode hlistnodecode vlistnodecode \ -rulenodecode insertnodecode marknodecode adjustnodecode ligaturenodecode \ -discretionarynodecode whatsitnodecode mathnodecode gluenodecode kernnodecode \ -penaltynodecode unsetnodecode mathsnodecode charifcode catifcode \ -numifcode dimifcode oddifcode vmodeifcode hmodeifcode \ -mmodeifcode innerifcode voidifcode hboxifcode vboxifcode \ 
-xifcode eofifcode trueifcode falseifcode caseifcode \ -definedifcode csnameifcode fontcharifcode fontslantperpoint fontinterwordspace \ -fontinterwordstretch fontinterwordshrink fontexheight fontemwidth fontextraspace \ -slantperpoint interwordspace interwordstretch interwordshrink exheight \ -emwidth extraspace mathsupdisplay mathsupnormal mathsupcramped \ -mathsubnormal mathsubcombined mathaxisheight startmode stopmode \ -startnotmode stopnotmode startmodeset stopmodeset doifmode \ -doifmodeelse doifnotmode startallmodes stopallmodes startnotallmodes \ -stopnotallmodes doifallmodes doifallmodeselse doifnotallmodes startenvironment \ -stopenvironment environment startcomponent stopcomponent component \ -startproduct stopproduct product startproject stopproject \ -project starttext stoptext startnotext stopnotext \ -startdocument stopdocument documentvariable setupdocument startmodule \ -stopmodule usemodule usetexmodule useluamodule setupmodule \ -currentmoduleparameter moduleparameter startTEXpage stopTEXpage enablemode \ -disablemode preventmode globalenablemode globaldisablemode globalpreventmode \ -pushmode popmode typescriptone typescripttwo typescriptthree \ -mathsizesuffix mathordcode mathopcode mathbincode mathrelcode \ -mathopencode mathclosecode mathpunctcode mathalphacode mathinnercode \ -mathnothingcode mathlimopcode mathnolopcode mathboxcode mathchoicecode \ -mathaccentcode mathradicalcode constantnumber constantnumberargument constantdimen \ -constantdimenargument constantemptyargument continueifinputfile luastringsep !!bs \ -!!es lefttorightmark righttoleftmark breakablethinspace nobreakspace \ -narrownobreakspace zerowidthnobreakspace ideographicspace ideographichalffillspace twoperemspace \ -threeperemspace fourperemspace fiveperemspace sixperemspace figurespace \ -punctuationspace hairspace zerowidthspace zerowidthnonjoiner zerowidthjoiner \ -zwnj zwj - -keywordclass.context.helpers=\ -startsetups stopsetups startxmlsetups stopxmlsetups \ 
-startluasetups stopluasetups starttexsetups stoptexsetups startrawsetups \ -stoprawsetups startlocalsetups stoplocalsetups starttexdefinition stoptexdefinition \ -starttexcode stoptexcode startcontextcode stopcontextcode doifsetupselse \ -doifsetups doifnotsetups setup setups texsetup \ -xmlsetup luasetup directsetup doifelsecommandhandler doifnotcommandhandler \ -doifcommandhandler newmode setmode resetmode newsystemmode \ -setsystemmode resetsystemmode pushsystemmode popsystemmode booleanmodevalue \ -newcount newdimen newskip newmuskip newbox \ -newtoks newread newwrite newmarks newinsert \ -newattribute newif newlanguage newfamily newfam \ -newhelp then begcsname strippedcsname firstargumentfalse \ -firstargumenttrue secondargumentfalse secondargumenttrue thirdargumentfalse thirdargumenttrue \ -fourthargumentfalse fourthargumenttrue fifthargumentfalse fifthsargumenttrue sixthargumentfalse \ -sixtsargumenttrue doglobal dodoglobal redoglobal resetglobal \ -donothing dontcomplain forgetall donetrue donefalse \ -htdp unvoidbox hfilll vfilll mathbox \ -mathlimop mathnolop mathnothing mathalpha currentcatcodetable \ -defaultcatcodetable catcodetablename newcatcodetable startcatcodetable stopcatcodetable \ -startextendcatcodetable stopextendcatcodetable pushcatcodetable popcatcodetable restorecatcodes \ -setcatcodetable letcatcodecommand defcatcodecommand uedcatcodecommand hglue \ -vglue hfillneg vfillneg hfilllneg vfilllneg \ -ruledhss ruledhfil ruledhfill ruledhfilneg ruledhfillneg \ -normalhfillneg ruledvss ruledvfil ruledvfill ruledvfilneg \ -ruledvfillneg normalvfillneg ruledhbox ruledvbox ruledvtop \ -ruledvcenter ruledmbox ruledhskip ruledvskip ruledkern \ -ruledmskip ruledmkern ruledhglue ruledvglue normalhglue \ -normalvglue ruledpenalty filledhboxb filledhboxr filledhboxg \ -filledhboxc filledhboxm filledhboxy filledhboxk scratchcounter \ -globalscratchcounter scratchdimen globalscratchdimen scratchskip globalscratchskip \ -scratchmuskip globalscratchmuskip 
scratchtoks globalscratchtoks scratchbox \ -globalscratchbox normalbaselineskip normallineskip normallineskiplimit availablehsize \ -localhsize setlocalhsize nextbox dowithnextbox dowithnextboxcs \ -dowithnextboxcontent dowithnextboxcontentcs scratchwidth scratchheight scratchdepth \ -scratchoffset scratchdistance scratchhsize scratchvsize scratchxoffset \ -scratchyoffset scratchhoffset scratchvoffset scratchxposition scratchyposition \ -scratchtopoffset scratchbottomoffset scratchleftoffset scratchrightoffset scratchcounterone \ -scratchcountertwo scratchcounterthree scratchdimenone scratchdimentwo scratchdimenthree \ -scratchskipone scratchskiptwo scratchskipthree scratchmuskipone scratchmuskiptwo \ -scratchmuskipthree scratchtoksone scratchtokstwo scratchtoksthree scratchboxone \ -scratchboxtwo scratchboxthree scratchnx scratchny scratchmx \ -scratchmy scratchunicode scratchleftskip scratchrightskip scratchtopskip \ -scratchbottomskip doif doifnot doifelse doifinset \ -doifnotinset doifinsetelse doifnextcharelse doifnextoptionalelse doifnextbgroupelse \ -doifnextparenthesiselse doiffastoptionalcheckelse doifundefinedelse doifdefinedelse doifundefined \ -doifdefined doifelsevalue doifvalue doifnotvalue doifnothing \ -doifsomething doifelsenothing doifsomethingelse doifvaluenothing doifvaluesomething \ -doifelsevaluenothing doifdimensionelse doifnumberelse doifnumber doifnotnumber \ -doifcommonelse doifcommon doifnotcommon doifinstring doifnotinstring \ -doifinstringelse doifassignmentelse docheckassignment tracingall tracingnone \ -loggingall removetoks appendtoks prependtoks appendtotoks \ -prependtotoks to endgraf endpar everyendpar \ -reseteverypar finishpar empty null space \ -quad enspace obeyspaces obeylines obeyedspace \ -obeyedline normalspace executeifdefined singleexpandafter doubleexpandafter \ -tripleexpandafter dontleavehmode removelastspace removeunwantedspaces keepunwantedspaces \ -wait writestatus define defineexpandable redefine \ -setmeasure 
setemeasure setgmeasure setxmeasure definemeasure \ -freezemeasure measure measured installcorenamespace getvalue \ -getuvalue setvalue setevalue setgvalue setxvalue \ -letvalue letgvalue resetvalue undefinevalue ignorevalue \ -setuvalue setuevalue setugvalue setuxvalue globallet \ -glet udef ugdef uedef uxdef \ -checked unique getparameters geteparameters getgparameters \ -getxparameters forgetparameters copyparameters getdummyparameters dummyparameter \ -directdummyparameter setdummyparameter letdummyparameter usedummystyleandcolor usedummystyleparameter \ -usedummycolorparameter processcommalist processcommacommand quitcommalist quitprevcommalist \ -processaction processallactions processfirstactioninset processallactionsinset unexpanded \ -expanded startexpanded stopexpanded protected protect \ -unprotect firstofoneargument firstoftwoarguments secondoftwoarguments firstofthreearguments \ -secondofthreearguments thirdofthreearguments firstoffourarguments secondoffourarguments thirdoffourarguments \ -fourthoffourarguments firstoffivearguments secondoffivearguments thirdoffivearguments fourthoffivearguments \ -fifthoffivearguments firstofsixarguments secondofsixarguments thirdofsixarguments fourthofsixarguments \ -fifthofsixarguments sixthofsixarguments firstofoneunexpanded gobbleoneargument gobbletwoarguments \ -gobblethreearguments gobblefourarguments gobblefivearguments gobblesixarguments gobblesevenarguments \ -gobbleeightarguments gobbleninearguments gobbletenarguments gobbleoneoptional gobbletwooptionals \ -gobblethreeoptionals gobblefouroptionals gobblefiveoptionals dorecurse doloop \ -exitloop dostepwiserecurse recurselevel recursedepth dofastloopcs \ -dowith newconstant setnewconstant setconstant setconstantvalue \ -newconditional settrue setfalse settruevalue setfalsevalue \ -newmacro setnewmacro newfraction newsignal dosingleempty \ -dodoubleempty dotripleempty doquadrupleempty doquintupleempty dosixtupleempty \ -doseventupleempty dosingleargument 
dodoubleargument dotripleargument doquadrupleargument \ -doquintupleargument dosixtupleargument doseventupleargument dosinglegroupempty dodoublegroupempty \ -dotriplegroupempty doquadruplegroupempty doquintuplegroupempty permitspacesbetweengroups dontpermitspacesbetweengroups \ -nopdfcompression maximumpdfcompression normalpdfcompression modulonumber dividenumber \ -getfirstcharacter doiffirstcharelse startnointerference stopnointerference twodigits \ -threedigits leftorright strut setstrut strutbox \ -strutht strutdp strutwd struthtdp begstrut \ -endstrut lineheight ordordspacing ordopspacing ordbinspacing \ -ordrelspacing ordopenspacing ordclosespacing ordpunctspacing ordinnerspacing \ -opordspacing opopspacing opbinspacing oprelspacing opopenspacing \ -opclosespacing oppunctspacing opinnerspacing binordspacing binopspacing \ -binbinspacing binrelspacing binopenspacing binclosespacing binpunctspacing \ -bininnerspacing relordspacing relopspacing relbinspacing relrelspacing \ -relopenspacing relclosespacing relpunctspacing relinnerspacing openordspacing \ -openopspacing openbinspacing openrelspacing openopenspacing openclosespacing \ -openpunctspacing openinnerspacing closeordspacing closeopspacing closebinspacing \ -closerelspacing closeopenspacing closeclosespacing closepunctspacing closeinnerspacing \ -punctordspacing punctopspacing punctbinspacing punctrelspacing punctopenspacing \ -punctclosespacing punctpunctspacing punctinnerspacing innerordspacing inneropspacing \ -innerbinspacing innerrelspacing inneropenspacing innerclosespacing innerpunctspacing \ -innerinnerspacing normalreqno startimath stopimath normalstartimath \ -normalstopimath startdmath stopdmath normalstartdmath normalstopdmath \ -uncramped cramped triggermathstyle mathstylefont mathsmallstylefont \ -mathstyleface mathsmallstyleface mathstylecommand mathpalette mathstylehbox \ -mathstylevbox mathstylevcenter mathstylevcenteredhbox mathstylevcenteredvbox mathtext \ -setmathsmalltextbox 
setmathtextbox triggerdisplaystyle triggertextstyle triggerscriptstyle \ -triggerscriptscriptstyle triggeruncrampedstyle triggercrampedstyle triggersmallstyle triggeruncrampedsmallstyle \ -triggercrampedsmallstyle triggerbigstyle triggeruncrampedbigstyle triggercrampedbigstyle luaexpr \ -expdoifelse expdoif expdoifnot expdoifcommonelse expdoifinsetelse \ -ctxdirectlua ctxlatelua ctxsprint ctxwrite ctxcommand \ -ctxdirectcommand ctxlatecommand ctxreport ctxlua luacode \ -lateluacode directluacode registerctxluafile ctxloadluafile luaversion \ -luamajorversion luaminorversion ctxluacode luaconditional luaexpanded \ -startluaparameterset stopluaparameterset luaparameterset definenamedlua obeylualines \ -obeyluatokens startluacode stopluacode startlua stoplua \ -carryoverpar assumelongusagecs Umathbotaccent righttolefthbox lefttorighthbox \ -righttoleftvbox lefttorightvbox righttoleftvtop lefttorightvtop rtlhbox \ -ltrhbox rtlvbox ltrvbox rtlvtop ltrvtop \ -autodirhbox autodirvbox autodirvtop lefttoright righttoleft \ -synchronizelayoutdirection synchronizedisplaydirection synchronizeinlinedirection lesshyphens morehyphens \ -nohyphens dohyphens Ucheckedstartdisplaymath Ucheckedstopdisplaymath - diff --git a/context/data/scite/scite-context-user.properties b/context/data/scite/scite-context-user.properties deleted file mode 100644 index 88e803031..000000000 --- a/context/data/scite/scite-context-user.properties +++ /dev/null @@ -1,15 +0,0 @@ -# this loades the basics - -import scite-context - -# internal lexing - -import scite-context-internal - -# external lexing (tex, mps, cld/lua, xml) - -import scite-context-external - -# this does some tuning - -import scite-pragma diff --git a/context/data/scite/scite-context-visual.tex b/context/data/scite/scite-context-visual.tex deleted file mode 100644 index 0a1b8bb71..000000000 --- a/context/data/scite/scite-context-visual.tex +++ /dev/null @@ -1,52 +0,0 @@ -% language=uk - -\usemodule[art-01] - -\defineframedtext - [entry] 
- -\starttext - -\startchapter[title=Some fancy title] - - \startluacode - local entries = { -- there can be more - { text = "The third entry!" }, - { text = "The fourth entry!" }, - } - - for i=1,#entries do - context.startentry() - context(entries[i].text) - context.stopentry() - end - \stopluacode - - This is just some text to demonstrate the realtime spellchecker - in combination with the embedded lua and metapost lexers and - inline as well as display \ctxlua{context("lua code")}. - - Non breakable spaces in for instance 10 mm and quads like here - are shown as well. - - \startlinecorrection - \startMPcode - for i=1 upto 100 : - draw fullcircle scaled (i*mm) ; - endfor ; - \stopMPcode - \stoplinecorrection - - \iftrue - \def\crap{some text} % who cares - \else - \def\crap{some crap} % about this - \fi - - \blank[2*big] - - \crap - -\stopchapter - -\stoptext diff --git a/context/data/scite/tex.properties b/context/data/scite/tex.properties deleted file mode 100644 index 3fbad41cb..000000000 --- a/context/data/scite/tex.properties +++ /dev/null @@ -1 +0,0 @@ -import scite-tex diff --git a/doc/context/manuals/allkind/mkiv-publications.tex b/doc/context/manuals/allkind/mkiv-publications.tex index 3300a0f53..0a81467fc 100644 --- a/doc/context/manuals/allkind/mkiv-publications.tex +++ b/doc/context/manuals/allkind/mkiv-publications.tex @@ -17,6 +17,8 @@ % \showframe +% \usemodule[lua-checkheads] + \usemodule[abr-02] \usemodule[set-11] diff --git a/doc/context/scripts/mkiv/context.html b/doc/context/scripts/mkiv/context.html index d285af311..0833089aa 100644 --- a/doc/context/scripts/mkiv/context.html +++ b/doc/context/scripts/mkiv/context.html @@ -75,6 +75,9 @@ <tr><th>--jiton</th><td></td><td>use luajittex with jit turned on (in most cases not faster, even slower)</td></tr> <tr><th/><td/><td/></tr> <tr><th>--once</th><td></td><td>only run once (no multipass data file is produced)</td></tr> + <tr><th>--runs</th><td></td><td>process at most this many times</td></tr> 
+ <tr><th>--forcedruns</th><td></td><td>process this many times (permits for optimization trial runs)</td></tr> + <tr><th/><td/><td/></tr> <tr><th>--batchmode</th><td></td><td>run without stopping and do not show messages on the console</td></tr> <tr><th>--nonstopmode</th><td></td><td>run without stopping</td></tr> <tr><th>--synctex</th><td></td><td>run with synctex enabled (optional value: zipped, unzipped, 1, -1)</td></tr> diff --git a/doc/context/scripts/mkiv/context.man b/doc/context/scripts/mkiv/context.man index ea12beb06..c1808f073 100644 --- a/doc/context/scripts/mkiv/context.man +++ b/doc/context/scripts/mkiv/context.man @@ -95,6 +95,12 @@ use luajittex with jit turned on (in most cases not faster, even slower) .B --once only run once (no multipass data file is produced) .TP +.B --runs +process at most this many times +.TP +.B --forcedruns +process this many times (permits for optimization trial runs) +.TP .B --batchmode run without stopping and do not show messages on the console .TP diff --git a/doc/context/scripts/mkiv/context.xml b/doc/context/scripts/mkiv/context.xml index a3812288f..af8d5cc20 100644 --- a/doc/context/scripts/mkiv/context.xml +++ b/doc/context/scripts/mkiv/context.xml @@ -108,6 +108,14 @@ <flag name="once"> <short>only run once (no multipass data file is produced)</short> </flag> + <flag name="runs"> + <short>process at most this many times</short> + </flag> + <flag name="forcedruns"> + <short>process this many times (permits for optimization trial runs)</short> + </flag> + </subcategory> + <subcategory> <flag name="batchmode"> <short>run without stopping and do not show messages on the console</short> </flag> @@ -117,7 +125,7 @@ <flag name="synctex"> <short>run with synctex enabled (optional value: zipped, unzipped, 1, -1)</short> </flag> - </subcategory> + </subcategory> <subcategory> <flag name="generate"> <short>generate file database etc. 
(as luatools does)</short> diff --git a/doc/context/scripts/mkiv/mtx-context.html b/doc/context/scripts/mkiv/mtx-context.html index d285af311..0833089aa 100644 --- a/doc/context/scripts/mkiv/mtx-context.html +++ b/doc/context/scripts/mkiv/mtx-context.html @@ -75,6 +75,9 @@ <tr><th>--jiton</th><td></td><td>use luajittex with jit turned on (in most cases not faster, even slower)</td></tr> <tr><th/><td/><td/></tr> <tr><th>--once</th><td></td><td>only run once (no multipass data file is produced)</td></tr> + <tr><th>--runs</th><td></td><td>process at most this many times</td></tr> + <tr><th>--forcedruns</th><td></td><td>process this many times (permits for optimization trial runs)</td></tr> + <tr><th/><td/><td/></tr> <tr><th>--batchmode</th><td></td><td>run without stopping and do not show messages on the console</td></tr> <tr><th>--nonstopmode</th><td></td><td>run without stopping</td></tr> <tr><th>--synctex</th><td></td><td>run with synctex enabled (optional value: zipped, unzipped, 1, -1)</td></tr> diff --git a/doc/context/scripts/mkiv/mtx-context.man b/doc/context/scripts/mkiv/mtx-context.man index ea12beb06..c1808f073 100644 --- a/doc/context/scripts/mkiv/mtx-context.man +++ b/doc/context/scripts/mkiv/mtx-context.man @@ -95,6 +95,12 @@ use luajittex with jit turned on (in most cases not faster, even slower) .B --once only run once (no multipass data file is produced) .TP +.B --runs +process at most this many times +.TP +.B --forcedruns +process this many times (permits for optimization trial runs) +.TP .B --batchmode run without stopping and do not show messages on the console .TP diff --git a/doc/context/scripts/mkiv/mtx-context.xml b/doc/context/scripts/mkiv/mtx-context.xml index a3812288f..af8d5cc20 100644 --- a/doc/context/scripts/mkiv/mtx-context.xml +++ b/doc/context/scripts/mkiv/mtx-context.xml @@ -108,6 +108,14 @@ <flag name="once"> <short>only run once (no multipass data file is produced)</short> </flag> + <flag name="runs"> + <short>process at most this 
many times</short> + </flag> + <flag name="forcedruns"> + <short>process this many times (permits for optimization trial runs)</short> + </flag> + </subcategory> + <subcategory> <flag name="batchmode"> <short>run without stopping and do not show messages on the console</short> </flag> @@ -117,7 +125,7 @@ <flag name="synctex"> <short>run with synctex enabled (optional value: zipped, unzipped, 1, -1)</short> </flag> - </subcategory> + </subcategory> <subcategory> <flag name="generate"> <short>generate file database etc. (as luatools does)</short> diff --git a/doc/context/scripts/mkiv/mtx-scite.html b/doc/context/scripts/mkiv/mtx-scite.html index c4dd157e0..24229db73 100644 --- a/doc/context/scripts/mkiv/mtx-scite.html +++ b/doc/context/scripts/mkiv/mtx-scite.html @@ -40,6 +40,8 @@ <tr><th style="width: 10em">flag</th><th style="width: 8em">value</th><th>description</th></tr> <tr><th/><td/><td/></tr> <tr><th>--words</th><td></td><td>convert spell-*.txt into spell-*.lua</td></tr> + <tr><th>--tree</th><td></td><td>converts a tree into an html tree (--source --target --numbers)</td></tr> + <tr><th>--file</th><td></td><td>converts a file into an html tree (--source --target --numbers --lexer)</td></tr> </table> <br/> </div> diff --git a/doc/context/scripts/mkiv/mtx-scite.man b/doc/context/scripts/mkiv/mtx-scite.man index ece69a9a6..8f268c554 100644 --- a/doc/context/scripts/mkiv/mtx-scite.man +++ b/doc/context/scripts/mkiv/mtx-scite.man @@ -13,6 +13,12 @@ .TP .B --words convert spell-*.txt into spell-*.lua +.TP +.B --tree +converts a tree into an html tree (--source --target --numbers) +.TP +.B --file +converts a file into an html tree (--source --target --numbers --lexer) .SH AUTHOR More information about ConTeXt and the tools that come with it can be found at: diff --git a/doc/context/scripts/mkiv/mtx-scite.xml b/doc/context/scripts/mkiv/mtx-scite.xml index 87fe506dc..65ad8736a 100644 --- a/doc/context/scripts/mkiv/mtx-scite.xml +++ 
b/doc/context/scripts/mkiv/mtx-scite.xml @@ -9,6 +9,8 @@ <category name="basic"> <subcategory> <flag name="words"><short>convert spell-*.txt into spell-*.lua</short></flag> + <flag name="tree"><short>converts a tree into an html tree (--source --target --numbers)</short></flag> + <flag name="file"><short>converts a file into an html tree (--source --target --numbers --lexer)</short></flag> </subcategory> </category> </flags> diff --git a/doc/context/scripts/mkiv/mtx-update.html b/doc/context/scripts/mkiv/mtx-update.html index bda0822e9..1cc6bd3af 100644 --- a/doc/context/scripts/mkiv/mtx-update.html +++ b/doc/context/scripts/mkiv/mtx-update.html @@ -14,7 +14,7 @@ <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> <head> - <title>ConTeXt Minimals Updater 1.01</title> + <title>ConTeXt Minimals Updater 1.02</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/> <style type="text/css"> body { color: #FFFFFF; background-color: #808080; font-family: optima, verdana, futura, "lucida sans", arial, geneva, helvetica, sans; font-size: 12px; line-height: 18px; } a:link, a:active, a:visited { color: #FFFFFF; } a.dir-view:link, a.dir-view:active, a.dir-view:visited { color: #FFFFFF; text-decoration: underline; } .valid { color: #00FF00; } .invalid { color: #FF0000; } button, .commonlink, .smallbutton { font-weight: bold; font-size: 12px; text-decoration: none; color: #000000; border-color: #7F7F7F; border-style: solid; border-width: .125ex; background-color: #FFFFFF; padding: .5ex; } .smallbutton { width: 1em; } a.commonlink:link, a.commonlink:active, a.commonlink:visited, a.smalllink:link, a.smalllink:active, a.smalllink:visited { font-weight: bold; font-size: 12px; text-decoration: none; color: #000000; } h1, .title { font-style: normal; font-weight: normal; font-size: 18px; line-height: 18px; margin-bottom: 20px; } h2, .subtitle { font-style: normal; font-weight: normal; font-size: 12px; margin-top: 18px; margin-bottom: 18px; } table { 
line-height: 18px; font-size: 12px; margin: 0; } th { font-weight: bold; text-align: left; padding-bottom: 6px; } .tc { font-weight: bold; text-align: left; } p, li { max-width: 60em; } .empty-line { margin-top: 4px; } .more-room { margin-right: 1.5em; } .much-more-room { margin-right: 3em; } #main { position: absolute; left: 10%; top: 10%; right: 10%; bottom: 10%; z-index: 2; width: 80%; height: 80%; padding: 0%; margin: 0%; overflow: auto; border-style: none; border-width: 0; background-color: #3F3F3F; } #main-settings { margin: 12px; x_max-width: 60em; line-height: 18px; font-size: 12px; } #left { position: absolute; top : 10%; left: 0%; bottom: 0%; right: 90%; z-index: 1; width: 10%; height: 90%; padding: 0%; margin: 0%; font-size: 16px; border-style: none; border-width: 0; background-color: #4F6F6F; } #right { position: absolute; top : 0%; left: 90%; bottom: 10%; right: 0%; z-index: 1; width: 10%; height: 90%; padding: 0%; margin: 0%; font-size: 16px; border-style: none; border-width: 0; background-color: #4F6F6F; _margin-left: -15px; } #bottom { position: absolute; left: 10%; right: 0%; top: 90%; bottom: 0%; z-index: 1; width: 90%; height: 10%; padding: 0%; margin: 0%; font-size: 16px; border-style: none; border-width: 0; background-color: #6F6F8F; } #top { position: absolute; left: 0%; right: 10%; top: 0%; bottom: 90%; z-index: 1; width: 90%; height: 10%; padding: 0%; margin: 0%; font-size: 16px; border-style: none; border-width: 0; background-color: #6F6F8F; } #top-one { position: absolute; bottom: 50%; width: 100%; buggedheight: 100%; } #top-two { position: relative; margin-bottom: -9px; margin-left: 12px; margin-right: 12px; line-height: 18px; text-align: right; vertical-align: middle; } #bottom-one { position: absolute; bottom: 50%; width: 100%; buggedheight: 100%; } #bottom-two { position: relative; margin-bottom: -9px; margin-left: 12px; margin-right: 12px; line-height: 18px; text-align: left; vertical-align: middle; } #left-one { position: absolute; 
width: 100%; buggedheight: 100%; } #left-two { position: relative; margin-top: 12px; line-height: 18px; text-align: center; vertical-align: top; } #right-one { display: table; height: 100%; width: 100%; } #right-two { display: table-row; height: 100%; width: 100%; } #right-three { display: table-cell; width: 100%; vertical-align: bottom; _position: absolute; _top: 100%; } #right-four { text-align: center; margin-bottom: 2ex; _position: relative; _top: -100%; } #more-top { position: absolute; top: 0%; left: 90%; bottom: 90%; right: 0%; z-index: 3; width: 10%; height: 10%; padding: 0%; margin: 0%; border-style: none; border-width: 0; } #more-top-settings { text-align: center; } #more-right-settings { margin-right: 12px; margin-left: 12px; line-height: 18px; font-size: 10px; text-align: center; } #right-safari { _display: table; width: 100%; height: 100%; } @@ -24,7 +24,7 @@ </head> <body> <div id="top"> <div id="top-one"> - <div id="top-two">ConTeXt Minimals Updater 1.01 </div> + <div id="top-two">ConTeXt Minimals Updater 1.02 </div> </div> </div> <div id="bottom"> <div id="bottom-one"> diff --git a/doc/context/scripts/mkiv/mtx-update.man b/doc/context/scripts/mkiv/mtx-update.man index 7766122fb..60ba09fb5 100644 --- a/doc/context/scripts/mkiv/mtx-update.man +++ b/doc/context/scripts/mkiv/mtx-update.man @@ -1,4 +1,4 @@ -.TH "mtx-update" "1" "01-01-2014" "version 1.01" "ConTeXt Minimals Updater" +.TH "mtx-update" "1" "01-01-2014" "version 1.02" "ConTeXt Minimals Updater" .SH NAME .B mtx-update .SH SYNOPSIS diff --git a/doc/context/scripts/mkiv/mtx-update.xml b/doc/context/scripts/mkiv/mtx-update.xml index 13c25ae13..c5d9205c3 100644 --- a/doc/context/scripts/mkiv/mtx-update.xml +++ b/doc/context/scripts/mkiv/mtx-update.xml @@ -3,7 +3,7 @@ <metadata> <entry name="name">mtx-update</entry> <entry name="detail">ConTeXt Minimals Updater</entry> - <entry name="version">1.01</entry> + <entry name="version">1.02</entry> </metadata> <flags> <category name="basic"> diff --git 
a/metapost/context/base/metafun.mpiv b/metapost/context/base/metafun.mpiv index a113675e6..095b84b0e 100644 --- a/metapost/context/base/metafun.mpiv +++ b/metapost/context/base/metafun.mpiv @@ -19,6 +19,7 @@ input "mp-base.mpiv" ; input "mp-tool.mpiv" ; input "mp-mlib.mpiv" ; % "mp-core.mpiv" ; % todo: namespace and cleanup +input "mp-luas.mpiv" ; % experimental input "mp-page.mpiv" ; % todo: namespace and cleanup input "mp-butt.mpiv" ; % todo: namespace and cleanup input "mp-shap.mpiv" ; % will be improved diff --git a/metapost/context/base/mp-luas.mpiv b/metapost/context/base/mp-luas.mpiv new file mode 100644 index 000000000..b926b586c --- /dev/null +++ b/metapost/context/base/mp-luas.mpiv @@ -0,0 +1,90 @@ +%D \module +%D [ file=mp-luas.mpiv, +%D version=2014.04.14, +%D title=\CONTEXT\ \METAPOST\ graphics, +%D subtitle=\LUA, +%D author=Hans Hagen, +%D date=\currentdate, +%D copyright={PRAGMA ADE \& \CONTEXT\ Development Team}] +%C +%C This module is part of the \CONTEXT\ macro||package and is +%C therefore copyrighted by \PRAGMA. See mreadme.pdf for +%C details. 
+ +if known context_luas : endinput ; fi ; + +boolean context_luas ; context_luas := true ; + +% First variant: +% +% let lua = runscript ; +% +% Second variant: +% +% vardef lua (text t) = +% runscript(for s = t : s & endfor "") +% enddef; +% +% Third variant: +% +% vardef lua (text t) = +% runscript("" for s = t : +% if string s : +% & s +% elseif numeric s : +% & decimal s +% elseif boolean s : +% & if s : "true" else "false" fi +% fi endfor) +% enddef; +% +% Fourth variant: + +vardef mlib_luas_luacall(text t) = + runscript("" for s = t : + if string s : + & s + elseif numeric s : + & decimal s + elseif boolean s : + & if s : "true" else "false" fi + fi endfor + ) +enddef ; + +vardef mlib_luas_lualist(expr c)(text t) = + save b ; boolean b ; b := false ; + runscript(c & "(" for s = t : + if b : + & "," + else : + hide(b := true) + fi + if string s : + & ditto & s & ditto + elseif numeric s : + & decimal s + elseif boolean s : + & if s : "true" else "false" fi + fi endfor & ")" + ) +enddef ; + +def luacall = mlib_luas_luacall enddef ; % why no let + +vardef lualist@#(text t) = mlib_luas_lualist(str @#)(t) enddef ; + +string mlib_luas_s ; % saves save/restore + +vardef lua@#(text t) = + mlib_luas_s := str @# ; + if length(mlib_luas_s) > 0 : + mlib_luas_lualist(mlib_luas_s,t) + else : + mlib_luas_luacall(t) + fi +enddef ; + +vardef MP@#(text t) = + mlib_luas_lualist("MP." 
& str @#,t) +enddef ; diff --git a/metapost/context/base/mp-mlib.mpiv b/metapost/context/base/mp-mlib.mpiv index 12840b28e..252cd5fd0 100644 --- a/metapost/context/base/mp-mlib.mpiv +++ b/metapost/context/base/mp-mlib.mpiv @@ -366,6 +366,8 @@ vardef thetextext@#(expr p,z) = % interim labeloffset := textextoffset ; if string p : thetextext@#(rawtextext(p),z) + elseif numeric p : + thetextext@#(rawtextext(decimal p),z) else : p if (mfun_labtype@# >= 10) : diff --git a/metapost/context/fonts/bidi-symbols.tex b/metapost/context/fonts/bidi-symbols.tex index 800e0e4ea..24f883b3d 100644 --- a/metapost/context/fonts/bidi-symbols.tex +++ b/metapost/context/fonts/bidi-symbols.tex @@ -1,4 +1,4 @@ -% \nopdfcompression +\nopdfcompression % At the ConTeXt 2013 meeting Taco suggested to add ActualText entries to the % shapes. It took us a bit of experimenting and the current implementation of diff --git a/scripts/context/lua/mtx-context.lua b/scripts/context/lua/mtx-context.lua index 90efb5225..b53110bb1 100644 --- a/scripts/context/lua/mtx-context.lua +++ b/scripts/context/lua/mtx-context.lua @@ -546,6 +546,7 @@ function scripts.context.run(ctxdata,filename) local a_arrange = getargument("arrange") local a_noarrange = getargument("noarrange") local a_jiton = getargument("jiton") + local a_jithash = getargument("jithash") local a_texformat = getargument("texformat") -- a_batchmode = (a_batchmode and "batchmode") or (a_nonstopmode and "nonstopmode") or nil @@ -582,7 +583,8 @@ function scripts.context.run(ctxdata,filename) formatfile, scriptfile = resolvers.locateformat(formatname) end -- - a_jiton = (a_jiton or toboolean(analysis.jiton,true)) and true or nil + a_jiton = (a_jiton or toboolean(analysis.jiton,true)) and true or nil + a_jithash = validstring(a_jithash or analysis.jithash) or nil -- if not formatfile or not scriptfile then report("warning: no format found, forcing remake (source driven)") @@ -665,6 +667,7 @@ function scripts.context.run(ctxdata,filename) ["lua"] = 
scriptfile, ["jobname"] = jobname, ["jiton"] = a_jiton, + ["jithash"] = a_jithash, } -- if a_synctex then diff --git a/scripts/context/lua/mtx-context.xml b/scripts/context/lua/mtx-context.xml index a3812288f..af8d5cc20 100644 --- a/scripts/context/lua/mtx-context.xml +++ b/scripts/context/lua/mtx-context.xml @@ -108,6 +108,14 @@ <flag name="once"> <short>only run once (no multipass data file is produced)</short> </flag> + <flag name="runs"> + <short>process at most this many times</short> + </flag> + <flag name="forcedruns"> + <short>process this many times (permits for optimization trial runs)</short> + </flag> + </subcategory> + <subcategory> <flag name="batchmode"> <short>run without stopping and do not show messages on the console</short> </flag> @@ -117,7 +125,7 @@ <flag name="synctex"> <short>run with synctex enabled (optional value: zipped, unzipped, 1, -1)</short> </flag> - </subcategory> + </subcategory> <subcategory> <flag name="generate"> <short>generate file database etc. (as luatools does)</short> diff --git a/scripts/context/lua/mtx-scite.lua b/scripts/context/lua/mtx-scite.lua index 972edbfe6..ae8c67387 100644 --- a/scripts/context/lua/mtx-scite.lua +++ b/scripts/context/lua/mtx-scite.lua @@ -6,6 +6,8 @@ if not modules then modules = { } end modules ['mtx-scite'] = { license = "see context related readme files" } +-- mtxrun --script scite --tree --source=t:/texmf/tex/context --target=e:/tmp/context --numbers + local P, R, S, C, Ct, Cf, Cc, Cg = lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.Ct, lpeg.Cf, lpeg.Cc, lpeg.Cg local lpegmatch = lpeg.match local format, lower, gmatch = string.format, string.lower, string.gmatch @@ -22,6 +24,8 @@ local helpinfo = [[ <category name="basic"> <subcategory> <flag name="words"><short>convert spell-*.txt into spell-*.lua</short></flag> + <flag name="tree"><short>converts a tree into an html tree (--source --target --numbers)</short></flag> + <flag name="file"><short>converts a file into an html tree (--source --target 
--numbers --lexer)</short></flag> </subcategory> </category> </flags> @@ -36,6 +40,8 @@ local application = logs.application { local report = application.report +local scite = require("util-sci") + scripts = scripts or { } scripts.scite = scripts.scite or { } @@ -241,6 +247,51 @@ function scripts.scite.words() report("you need to move the lua files to lexers/data") end +function scripts.scite.tree() + local source = environment.argument("source") + local target = environment.argument("target") + local numbers = environment.argument("numbers") + if not lfs.isdir(source) then + report("you need to pass a valid source path with --source") + return + end + if not lfs.isdir(target) then + report("you need to pass a valid target path with --target") + return + end + if source == target then + report("source and target paths must be different") + return + end + scite.converttree(source,target,numbers) +end + +function scripts.scite.file() + local source = environment.argument("source") + local target = environment.argument("target") + local lexer = environment.argument("lexer") + local numbers = environment.argument("numbers") + if source then + local target = target or file.replacesuffix(source,"html") + if source == target then + report("the source file cannot be the same as the target") + else + scite.filetohtml(source,lexer,target,numbers) + end + + else + for i=1,#environment.files do + local source = environment.files[i] + local target = file.replacesuffix(source,"html") + if source == target then + report("the source file cannot be the same as the target") + else + scite.filetohtml(source,nil,target,numbers) + end + end + end +end + -- if environment.argument("start") then -- scripts.scite.start(true) -- elseif environment.argument("test") then @@ -251,6 +302,10 @@ end if environment.argument("words") then scripts.scite.words() +elseif environment.argument("tree") then + scripts.scite.tree() +elseif environment.argument("file") then + scripts.scite.file() elseif 
environment.argument("exporthelp") then application.export(environment.argument("exporthelp"),environment.files[1]) else diff --git a/scripts/context/lua/mtx-update.lua b/scripts/context/lua/mtx-update.lua index c7eb74395..aedc48041 100644 --- a/scripts/context/lua/mtx-update.lua +++ b/scripts/context/lua/mtx-update.lua @@ -11,13 +11,15 @@ if not modules then modules = { } end modules ['mtx-update'] = { -- Together with Arthur Reutenauer she made sure that it worked well on all -- platforms that matter. +-- LuaTeX and LuajitTeX are now always installed together. + local helpinfo = [[ <?xml version="1.0"?> <application> <metadata> <entry name="name">mtx-update</entry> <entry name="detail">ConTeXt Minimals Updater</entry> - <entry name="version">1.01</entry> + <entry name="version">1.02</entry> </metadata> <flags> <category name="basic"> @@ -48,7 +50,7 @@ local helpinfo = [[ local application = logs.application { name = "mtx-update", - banner = "ConTeXt Minimals Updater 1.01", + banner = "ConTeXt Minimals Updater 1.02", helpinfo = helpinfo, } @@ -124,7 +126,7 @@ scripts.update.engines = { ["luatex"] = { { "fonts/new/", "texmf" }, { "bin/luatex/<platform>/", "texmf-<platform>" }, - { "bin/luajittex/<platform>/","texmf-<platform>" }, + -- { "bin/luajittex/<platform>/","texmf-<platform>" }, }, ["xetex"] = { { "base/xetex/", "texmf" }, @@ -142,7 +144,7 @@ scripts.update.engines = { { "fonts/old/", "texmf" }, { "base/xetex/", "texmf" }, { "bin/luatex/<platform>/", "texmf-<platform>" }, - { "bin/luajittex/<platform>/","texmf-<platform>" }, + -- { "bin/luajittex/<platform>/","texmf-<platform>" }, { "bin/xetex/<platform>/", "texmf-<platform>" }, { "bin/pdftex/<platform>/", "texmf-<platform>" }, }, @@ -561,9 +563,8 @@ function scripts.update.make() local formatlist = concat(table.fromhash(texformats), " ") if formatlist ~= "" then for engine in table.sortedhash(engines) do - if engine == "luatex" then + if engine == "luatex" or engine == "luajittex" then 
scripts.update.run(format('mtxrun --tree="%s" --script context --autogenerate --make',texroot)) - elseif engine == "luajittex" then scripts.update.run(format('mtxrun --tree="%s" --script context --autogenerate --make --engine=luajittex',texroot)) else scripts.update.run(format('mtxrun --tree="%s" --script texexec --make --all --%s %s',texroot,engine,formatlist)) diff --git a/scripts/context/lua/mtxrun.lua b/scripts/context/lua/mtxrun.lua index 3372831b3..8679aefb1 100644 --- a/scripts/context/lua/mtxrun.lua +++ b/scripts/context/lua/mtxrun.lua @@ -56,7 +56,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-lua"] = package.loaded["l-lua"] or true --- original size: 3247, stripped down to: 1763 +-- original size: 3409, stripped down to: 1763 if not modules then modules={} end modules ['l-lua']={ version=1.001, @@ -1187,7 +1187,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-table"] = package.loaded["l-table"] or true --- original size: 31142, stripped down to: 20283 +-- original size: 31860, stripped down to: 20846 if not modules then modules={} end modules ['l-table']={ version=1.001, @@ -1265,6 +1265,36 @@ local function sortedkeys(tab) return {} end end +local function sortedhashonly(tab) + if tab then + local srt,s={},0 + for key,_ in next,tab do + if type(key)=="string" then + s=s+1 + srt[s]=key + end + end + sort(srt) + return srt + else + return {} + end +end +local function sortedindexonly(tab) + if tab then + local srt,s={},0 + for key,_ in next,tab do + if type(key)=="number" then + s=s+1 + srt[s]=key + end + end + sort(srt) + return srt + else + return {} + end +end local function sortedhashkeys(tab,cmp) if tab then local srt,s={},0 @@ -1290,6 +1320,8 @@ function table.allkeys(t) return sortedkeys(keys) end table.sortedkeys=sortedkeys +table.sortedhashonly=sortedhashonly +table.sortedindexonly=sortedindexonly table.sortedhashkeys=sortedhashkeys local function nothing() end local function sortedhash(t,cmp) @@ 
-2078,7 +2110,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-io"] = package.loaded["l-io"] or true --- original size: 8817, stripped down to: 6340 +-- original size: 8824, stripped down to: 6347 if not modules then modules={} end modules ['l-io']={ version=1.001, @@ -2092,7 +2124,7 @@ local byte,find,gsub,format=string.byte,string.find,string.gsub,string.format local concat=table.concat local floor=math.floor local type=type -if string.find(os.getenv("PATH"),";") then +if string.find(os.getenv("PATH"),";",1,true) then io.fileseparator,io.pathseparator="\\",";" else io.fileseparator,io.pathseparator="/",":" @@ -2613,7 +2645,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-os"] = package.loaded["l-os"] or true --- original size: 16023, stripped down to: 9634 +-- original size: 16093, stripped down to: 9704 if not modules then modules={} end modules ['l-os']={ version=1.001, @@ -2703,7 +2735,7 @@ function os.resultof(command) end end if not io.fileseparator then - if find(os.getenv("PATH"),";") then + if find(os.getenv("PATH"),";",1,true) then io.fileseparator,io.pathseparator,os.type="\\",";",os.type or "mswin" else io.fileseparator,io.pathseparator,os.type="/",":",os.type or "unix" @@ -2763,7 +2795,7 @@ if platform~="" then elseif os.type=="windows" then function resolvers.platform(t,k) local platform,architecture="",os.getenv("PROCESSOR_ARCHITECTURE") or "" - if find(architecture,"AMD64") then + if find(architecture,"AMD64",1,true) then platform="win64" else platform="mswin" @@ -2775,9 +2807,9 @@ elseif os.type=="windows" then elseif name=="linux" then function resolvers.platform(t,k) local platform,architecture="",os.getenv("HOSTTYPE") or os.resultof("uname -m") or "" - if find(architecture,"x86_64") then + if find(architecture,"x86_64",1,true) then platform="linux-64" - elseif find(architecture,"ppc") then + elseif find(architecture,"ppc",1,true) then platform="linux-ppc" else platform="linux" @@ -2791,9 +2823,9 
@@ elseif name=="macosx" then local platform,architecture="",os.resultof("echo $HOSTTYPE") or "" if architecture=="" then platform="osx-intel" - elseif find(architecture,"i386") then + elseif find(architecture,"i386",1,true) then platform="osx-intel" - elseif find(architecture,"x86_64") then + elseif find(architecture,"x86_64",1,true) then platform="osx-64" else platform="osx-ppc" @@ -2805,7 +2837,7 @@ elseif name=="macosx" then elseif name=="sunos" then function resolvers.platform(t,k) local platform,architecture="",os.resultof("uname -m") or "" - if find(architecture,"sparc") then + if find(architecture,"sparc",1,true) then platform="solaris-sparc" else platform="solaris-intel" @@ -2817,7 +2849,7 @@ elseif name=="sunos" then elseif name=="freebsd" then function resolvers.platform(t,k) local platform,architecture="",os.resultof("uname -m") or "" - if find(architecture,"amd64") then + if find(architecture,"amd64",1,true) then platform="freebsd-amd64" else platform="freebsd" @@ -2829,7 +2861,7 @@ elseif name=="freebsd" then elseif name=="kfreebsd" then function resolvers.platform(t,k) local platform,architecture="",os.getenv("HOSTTYPE") or os.resultof("uname -m") or "" - if find(architecture,"x86_64") then + if find(architecture,"x86_64",1,true) then platform="kfreebsd-amd64" else platform="kfreebsd-i386" @@ -2847,7 +2879,7 @@ else end end function resolvers.bits(t,k) - local bits=find(os.platform,"64") and 64 or 32 + local bits=find(os.platform,"64",1,true) and 64 or 32 os.bits=bits return bits end @@ -3735,7 +3767,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-dir"] = package.loaded["l-dir"] or true --- original size: 14768, stripped down to: 9107 +-- original size: 14788, stripped down to: 9096 if not modules then modules={} end modules ['l-dir']={ version=1.001, @@ -3759,7 +3791,7 @@ local isfile=lfs.isfile local currentdir=lfs.currentdir local chdir=lfs.chdir local mkdir=lfs.mkdir -local onwindows=os.type=="windows" or 
find(os.getenv("PATH"),";") +local onwindows=os.type=="windows" or find(os.getenv("PATH"),";",1,true) if not isdir then function isdir(name) local a=attributes(name) @@ -3861,7 +3893,7 @@ local function glob(str,t) local split=lpegmatch(pattern,str) if split then local root,path,base=split[1],split[2],split[3] - local recurse=find(base,"%*%*") + local recurse=find(base,"**",1,true) local start=root..path local result=lpegmatch(filter,start..base) globpattern(start,result,recurse,t) @@ -3887,7 +3919,7 @@ local function glob(str,t) local t=t or {} local action=action or function(name) t[#t+1]=name end local root,path,base=split[1],split[2],split[3] - local recurse=find(base,"%*%*") + local recurse=find(base,"**",1,true) local start=root..path local result=lpegmatch(filter,start..base) globpattern(start,result,recurse,action) @@ -3942,7 +3974,6 @@ if onwindows then str="" for i=1,n do local s=select(i,...) - local s=select(i,...) if s=="" then elseif str=="" then str=s @@ -4195,7 +4226,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-unicode"] = package.loaded["l-unicode"] or true --- original size: 33473, stripped down to: 14938 +-- original size: 33706, stripped down to: 14938 if not modules then modules={} end modules ['l-unicode']={ version=1.001, @@ -4840,7 +4871,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-str"] = package.loaded["util-str"] or true --- original size: 29502, stripped down to: 16632 +-- original size: 32843, stripped down to: 18226 if not modules then modules={} end modules ['util-str']={ version=1.001, @@ -4876,9 +4907,11 @@ end if not number then number={} end local stripper=patterns.stripzeros local function points(n) + n=tonumber(n) return (not n or n==0) and "0pt" or lpegmatch(stripper,format("%.5fpt",n/65536)) end local function basepoints(n) + n=tonumber(n) return (not n or n==0) and "0bp" or lpegmatch(stripper,format("%.5fbp",n*(7200/7227)/65536)) end number.points=points @@ -4941,11 
+4974,39 @@ local pattern=Carg(1)/function(t) function strings.tabtospace(str,tab) return lpegmatch(pattern,str,1,tab or 7) end -function strings.striplong(str) - str=gsub(str,"^%s*","") - str=gsub(str,"[\n\r]+ *","\n") - return str +local newline=patterns.newline +local endofstring=patterns.endofstring +local whitespace=patterns.whitespace +local spacer=patterns.spacer +local space=spacer^0 +local nospace=space/"" +local endofline=nospace*newline +local stripend=(whitespace^1*endofstring)/"" +local normalline=(nospace*((1-space*(newline+endofstring))^1)*nospace) +local stripempty=endofline^1/"" +local normalempty=endofline^1 +local singleempty=endofline*(endofline^0/"") +local doubleempty=endofline*endofline^-1*(endofline^0/"") +local stripstart=stripempty^0 +local p_prune_normal=Cs (stripstart*(stripend+normalline+normalempty )^0 ) +local p_prune_collapse=Cs (stripstart*(stripend+normalline+doubleempty )^0 ) +local p_prune_noempty=Cs (stripstart*(stripend+normalline+singleempty )^0 ) +local p_retain_normal=Cs ((normalline+normalempty )^0 ) +local p_retain_collapse=Cs ((normalline+doubleempty )^0 ) +local p_retain_noempty=Cs ((normalline+singleempty )^0 ) +local striplinepatterns={ + ["prune"]=p_prune_normal, + ["prune and collapse"]=p_prune_collapse, + ["prune and no empty"]=p_prune_noempty, + ["retain"]=p_retain_normal, + ["retain and collapse"]=p_retain_collapse, + ["retain and no empty"]=p_retain_noempty, +} +strings.striplinepatterns=striplinepatterns +function strings.striplines(str,how) + return str and lpegmatch(how and striplinepatterns[how] or p_prune_collapse,str) or str end +strings.striplong=strings.striplines function strings.nice(str) str=gsub(str,"[:%-+_]+"," ") return str @@ -5111,7 +5172,7 @@ local format_i=function(f) if f and f~="" then return format("format('%%%si',a%s)",f,n) else - return format("format('%%i',a%s)",n) + return format("format('%%i',a%s)",n) end end local format_d=format_i @@ -5123,6 +5184,10 @@ local format_f=function(f) n=n+1 
return format("format('%%%sf',a%s)",f,n) end +local format_F=function(f) + n=n+1 + return format("((a%s == 0 and '0') or (a%s == 1 and '1') or format('%%%sf',a%s))",n,n,f,n) +end local format_g=function(f) n=n+1 return format("format('%%%sg',a%s)",f,n) @@ -5337,7 +5402,7 @@ local builder=Cs { "start", ( P("%")/""*( V("!") -+V("s")+V("q")+V("i")+V("d")+V("f")+V("g")+V("G")+V("e")+V("E")+V("x")+V("X")+V("o") ++V("s")+V("q")+V("i")+V("d")+V("f")+V("F")+V("g")+V("G")+V("e")+V("E")+V("x")+V("X")+V("o") +V("c")+V("C")+V("S") +V("Q") +V("N") @@ -5357,6 +5422,7 @@ local builder=Cs { "start", ["i"]=(prefix_any*P("i"))/format_i, ["d"]=(prefix_any*P("d"))/format_d, ["f"]=(prefix_any*P("f"))/format_f, + ["F"]=(prefix_any*P("F"))/format_F, ["g"]=(prefix_any*P("g"))/format_g, ["G"]=(prefix_any*P("G"))/format_G, ["e"]=(prefix_any*P("e"))/format_e, @@ -5404,7 +5470,7 @@ local function make(t,str) f=loadstripped(p)() else n=0 - p=lpegmatch(builder,str,1,"..",t._extensions_) + p=lpegmatch(builder,str,1,t._connector_,t._extensions_) if n>0 then p=format(template,preamble,t._preamble_,arguments[n],p) f=loadstripped(p,t._environment_)() @@ -5420,18 +5486,18 @@ local function use(t,fmt,...) 
end strings.formatters={} if _LUAVERSION<5.2 then - function strings.formatters.new() - local t={ _extensions_={},_preamble_=preamble,_environment_={},_type_="formatter" } + function strings.formatters.new(noconcat) + local t={ _type_="formatter",_connector_=noconcat and "," or "..",_extensions_={},_preamble_=preamble,_environment_={} } setmetatable(t,{ __index=make,__call=use }) return t end else - function strings.formatters.new() + function strings.formatters.new(noconcat) local e={} for k,v in next,environment do e[k]=v end - local t={ _extensions_={},_preamble_="",_environment_=e,_type_="formatter" } + local t={ _type_="formatter",_connector_=noconcat and "," or "..",_extensions_={},_preamble_="",_environment_=e } setmetatable(t,{ __index=make,__call=use }) return t end @@ -5473,7 +5539,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-tab"] = package.loaded["util-tab"] or true --- original size: 23980, stripped down to: 16119 +-- original size: 23985, stripped down to: 16069 if not modules then modules={} end modules ['util-tab']={ version=1.001, @@ -5494,27 +5560,29 @@ local sortedkeys,sortedpairs=table.sortedkeys,table.sortedpairs local formatters=string.formatters local utftoeight=utf.toeight local splitter=lpeg.tsplitat(".") -function tables.definetable(target,nofirst,nolast) - local composed,shortcut,t=nil,nil,{} +function utilities.tables.definetable(target,nofirst,nolast) + local composed,t=nil,{} local snippets=lpegmatch(splitter,target) for i=1,#snippets-(nolast and 1 or 0) do local name=snippets[i] if composed then - composed=shortcut.."."..name - shortcut=shortcut.."_"..name - t[#t+1]=formatters["local %s = %s if not %s then %s = { } %s = %s end"](shortcut,composed,shortcut,shortcut,composed,shortcut) + composed=composed.."."..name + t[#t+1]=formatters["if not %s then %s = { } end"](composed,composed) else composed=name - shortcut=name if not nofirst then t[#t+1]=formatters["%s = %s or { }"](composed,composed) end end end 
- if nolast then - composed=shortcut.."."..snippets[#snippets] + if composed then + if nolast then + composed=composed.."."..snippets[#snippets] + end + return concat(t,"\n"),composed + else + return "",target end - return concat(t,"\n"),composed end function tables.definedtable(...) local t=_G @@ -5541,7 +5609,7 @@ function tables.accesstable(target,root) end function tables.migratetable(target,v,root) local t=root or _G - local names=string.split(target,".") + local names=lpegmatch(splitter,target) for i=1,#names-1 do local name=names[i] t[name]=t[name] or {} @@ -6230,7 +6298,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-prs"] = package.loaded["util-prs"] or true --- original size: 19604, stripped down to: 13998 +-- original size: 19618, stripped down to: 14012 if not modules then modules={} end modules ['util-prs']={ version=1.001, @@ -6375,12 +6443,12 @@ function parsers.settings_to_array(str,strict) elseif not str or str=="" then return {} elseif strict then - if find(str,"{") then + if find(str,"{",1,true) then return lpegmatch(pattern,str) else return { str } end - elseif find(str,",") then + elseif find(str,",",1,true) then return lpegmatch(pattern,str) else return { str } @@ -7112,7 +7180,7 @@ do -- create closure to overcome 200 locals limit package.loaded["trac-log"] = package.loaded["trac-log"] or true --- original size: 25391, stripped down to: 16561 +-- original size: 25607, stripped down to: 16617 if not modules then modules={} end modules ['trac-log']={ version=1.001, @@ -7466,9 +7534,10 @@ local function setblocked(category,value) v.state=value end else - states=utilities.parsers.settings_to_hash(category) + states=utilities.parsers.settings_to_hash(category,type(states)=="table" and states or nil) for c,_ in next,states do - if data[c] then + local v=data[c] + if v then v.state=value else c=topattern(c,true,true) @@ -7747,7 +7816,7 @@ do -- create closure to overcome 200 locals limit package.loaded["trac-inf"] = 
package.loaded["trac-inf"] or true --- original size: 6643, stripped down to: 5272 +-- original size: 7011, stripped down to: 5590 if not modules then modules={} end modules ['trac-inf']={ version=1.001, @@ -7757,7 +7826,7 @@ if not modules then modules={} end modules ['trac-inf']={ license="see context related readme files" } local type,tonumber,select=type,tonumber,select -local format,lower=string.format,string.lower +local format,lower,find=string.format,string.lower,string.find local concat=table.concat local clock=os.gettimeofday or os.clock local setmetatableindex=table.setmetatableindex @@ -7848,10 +7917,8 @@ function statistics.show() if statistics.enable then local register=statistics.register register("used platform",function() - local mask=lua.mask or "ascii" - return format("%s, type: %s, binary subtree: %s, symbol mask: %s (%s)", - os.platform or "unknown",os.type or "unknown",environment.texos or "unknown", - mask,mask=="utf" and "τεχ" or "tex") + return format("%s, type: %s, binary subtree: %s", + os.platform or "unknown",os.type or "unknown",environment.texos or "unknown") end) register("luatex banner",function() return lower(status.banner) @@ -7864,14 +7931,23 @@ function statistics.show() return format("%s direct, %s indirect, %s total",total-indirect,indirect,total) end) if jit then - local status={ jit.status() } - if status[1] then - register("luajit status",function() - return concat(status," ",2) - end) + local jitstatus={ jit.status() } + if jitstatus[1] then + register("luajit options",concat(jitstatus," ",2)) end end - register("current memory usage",statistics.memused) + register("lua properties",function() + local list=status.list() + local hashchar=tonumber(list.luatex_hashchars) + local mask=lua.mask or "ascii" + return format("engine: %s, used memory: %s, hash type: %s, hash chars: min(%s,40), symbol mask: %s (%s)", + jit and "luajit" or "lua", + statistics.memused(), + list.luatex_hashtype or "default", + hashchar and 2^hashchar or 
"unknown", + mask, + mask=="utf" and "τεχ" or "tex") + end) register("runtime",statistics.runtime) logs.newline() for i=1,#statusinfo do @@ -8616,7 +8692,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-env"] = package.loaded["util-env"] or true --- original size: 8807, stripped down to: 5085 +-- original size: 8814, stripped down to: 5092 if not modules then modules={} end modules ['util-env']={ version=1.001, @@ -8753,7 +8829,7 @@ function environment.reconstructcommandline(arg,noquote) a=resolvers.resolve(a) a=unquoted(a) a=gsub(a,'"','\\"') - if find(a," ") then + if find(a," ",1,true) then result[#result+1]=quoted(a) else result[#result+1]=a @@ -8813,7 +8889,7 @@ do -- create closure to overcome 200 locals limit package.loaded["luat-env"] = package.loaded["luat-env"] or true --- original size: 5930, stripped down to: 4235 +-- original size: 6174, stripped down to: 4141 if not modules then modules={} end modules ['luat-env']={ version=1.001, @@ -8891,15 +8967,13 @@ function environment.luafilechunk(filename,silent) filename=file.replacesuffix(filename,"lua") local fullname=environment.luafile(filename) if fullname and fullname~="" then - local data=luautilities.loadedluacode(fullname,strippable,filename) - if trace_locating then + local data=luautilities.loadedluacode(fullname,strippable,filename) + if not silent then report_lua("loading file %a %s",fullname,not data and "failed" or "succeeded") - elseif not silent then - texio.write("<",data and "+ " or "- ",fullname,">") end return data else - if trace_locating then + if not silent then report_lua("unknown file %a",filename) end return nil @@ -9955,7 +10029,7 @@ do -- create closure to overcome 200 locals limit package.loaded["lxml-lpt"] = package.loaded["lxml-lpt"] or true --- original size: 48956, stripped down to: 30516 +-- original size: 48030, stripped down to: 30595 if not modules then modules={} end modules ['lxml-lpt']={ version=1.001, @@ -10936,8 +11010,13 @@ function 
xml.elements(root,pattern,reverse) local collected=applylpath(root,pattern) if not collected then return dummy - elseif reverse then - local c=#collected+1 + end + local n=#collected + if n==0 then + return dummy + end + if reverse then + local c=n+1 return function() if c>1 then c=c-1 @@ -10947,7 +11026,7 @@ function xml.elements(root,pattern,reverse) end end else - local n,c=#collected,0 + local c=0 return function() if c<n then c=c+1 @@ -10962,8 +11041,13 @@ function xml.collected(root,pattern,reverse) local collected=applylpath(root,pattern) if not collected then return dummy - elseif reverse then - local c=#collected+1 + end + local n=#collected + if n==0 then + return dummy + end + if reverse then + local c=n+1 return function() if c>1 then c=c-1 @@ -10971,7 +11055,7 @@ function xml.collected(root,pattern,reverse) end end else - local n,c=#collected,0 + local c=0 return function() if c<n then c=c+1 @@ -10986,7 +11070,7 @@ function xml.inspect(collection,pattern) report_lpath("pattern: %s\n\n%s\n",pattern,xml.tostring(e)) end end -local function split(e) +local function split(e) local dt=e.dt if dt then for i=1,#dt do @@ -12326,7 +12410,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-ini"] = package.loaded["data-ini"] or true --- original size: 7898, stripped down to: 5501 +-- original size: 7927, stripped down to: 5528 if not modules then modules={} end modules ['data-ini']={ version=1.001, @@ -12470,7 +12554,7 @@ if not texroot or texroot=="" then ossetenv('TEXROOT',texroot) end environment.texroot=file.collapsepath(texroot) -if profiler then +if type(profiler)=="table" and not jit then directives.register("system.profile",function() profiler.start("luatex-profile.log") end) @@ -12488,7 +12572,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-exp"] = package.loaded["data-exp"] or true --- original size: 15303, stripped down to: 9716 +-- original size: 15317, stripped down to: 9723 if not modules then 
modules={} end modules ['data-exp']={ version=1.001, @@ -12610,7 +12694,7 @@ function resolvers.cleanpath(str) report_expansions("no home dir set, ignoring dependent paths") end function resolvers.cleanpath(str) - if not str or find(str,"~") then + if not str or find(str,"~",1,true) then return "" else return lpegmatch(cleanup,str) @@ -13488,7 +13572,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-met"] = package.loaded["data-met"] or true --- original size: 5453, stripped down to: 4007 +-- original size: 5460, stripped down to: 4014 if not modules then modules={} end modules ['data-met']={ version=1.100, @@ -13517,7 +13601,7 @@ local function splitmethod(filename) return filename end filename=file.collapsepath(filename,".") - if not find(filename,"://") then + if not find(filename,"://",1,true) then return { scheme="file",path=filename,original=filename,filename=filename } end local specification=url.hashed(filename) @@ -13607,7 +13691,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-res"] = package.loaded["data-res"] or true --- original size: 61799, stripped down to: 42957 +-- original size: 61824, stripped down to: 42982 if not modules then modules={} end modules ['data-res']={ version=1.001, @@ -13838,7 +13922,7 @@ local function identify_configuration_files() local realname=resolvers.resolve(filename) if trace_locating then local fullpath=gsub(resolvers.resolve(collapsepath(filepath)),"//","/") - local weirdpath=find(fullpath,"/texmf.+/texmf") or not find(fullpath,"/web2c") + local weirdpath=find(fullpath,"/texmf.+/texmf") or not find(fullpath,"/web2c",1,true) report_resolving("looking for %a on %s path %a from specification %a",luacnfname,weirdpath and "weird" or "given",fullpath,filepath) end if lfs.isfile(realname) then @@ -14427,7 +14511,7 @@ local function find_direct(filename,allresults) end end local function find_wildcard(filename,allresults) - if find(filename,'%*') then + if 
find(filename,'*',1,true) then if trace_locating then report_resolving("checking wildcard %a",filename) end @@ -14573,7 +14657,7 @@ local function find_intree(filename,filetype,wantedfiles,allresults) local scheme=url.hasscheme(pathname) if not scheme or scheme=="file" then local pname=gsub(pathname,"%.%*$",'') - if not find(pname,"%*") then + if not find(pname,"*",1,true) then if can_be_dir(pname) then for k=1,#wantedfiles do local w=wantedfiles[k] @@ -14842,7 +14926,7 @@ local function findwildcardfiles(filename,allresults,result) local path=lower(lpegmatch(makewildcard,dirn) or dirn) local name=lower(lpegmatch(makewildcard,base) or base) local files,done=instance.files,false - if find(name,"%*") then + if find(name,"*",1,true) then local hashes=instance.hashes for k=1,#hashes do local hash=hashes[k] @@ -15885,7 +15969,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-sch"] = package.loaded["data-sch"] or true --- original size: 6202, stripped down to: 5149 +-- original size: 6213, stripped down to: 5160 if not modules then modules={} end modules ['data-sch']={ version=1.001, @@ -15928,7 +16012,7 @@ function resolvers.schemes.cleanname(specification) end local cached,loaded,reused,thresholds,handlers={},{},{},{},{} local function runcurl(name,cachename) - local command="curl --silent --create-dirs --output "..cachename.." "..name + local command="curl --silent --insecure --create-dirs --output "..cachename.." 
"..name os.spawn(command) end local function fetch(specification) @@ -16791,8 +16875,8 @@ end -- of closure -- used libraries : l-lua.lua l-package.lua l-lpeg.lua l-function.lua l-string.lua l-table.lua l-io.lua l-number.lua l-set.lua l-os.lua l-file.lua l-gzip.lua l-md5.lua l-url.lua l-dir.lua l-boolean.lua l-unicode.lua l-math.lua util-str.lua util-tab.lua util-sto.lua util-prs.lua util-fmt.lua trac-set.lua trac-log.lua trac-inf.lua trac-pro.lua util-lua.lua util-deb.lua util-mrg.lua util-tpl.lua util-env.lua luat-env.lua lxml-tab.lua lxml-lpt.lua lxml-mis.lua lxml-aux.lua lxml-xml.lua trac-xml.lua data-ini.lua data-exp.lua data-env.lua data-tmp.lua data-met.lua data-res.lua data-pre.lua data-inp.lua data-out.lua data-fil.lua data-con.lua data-use.lua data-zip.lua data-tre.lua data-sch.lua data-lua.lua data-aux.lua data-tmf.lua data-lst.lua util-lib.lua luat-sta.lua luat-fmt.lua -- skipped libraries : - --- original bytes : 689993 --- stripped bytes : 244562 +-- original bytes : 694558 +-- stripped bytes : 246497 -- end library merge diff --git a/scripts/context/ruby/texexec.rb b/scripts/context/ruby/texexec.rb index c673cb46b..7f8298c09 100644 --- a/scripts/context/ruby/texexec.rb +++ b/scripts/context/ruby/texexec.rb @@ -685,22 +685,22 @@ end # so far for compatibility, will move to tex -@@extrastringvars = [ +extrastringvars = [ 'pages', 'background', 'backspace', 'topspace', 'boxtype', 'tempdir','bannerheight', 'printformat', 'method', 'scale', 'selection', 'combination', 'textwidth', 'addempty', 'logfile', 'startline', 'endline', 'startcolumn', 'endcolumn', 'scale' ] -@@extrabooleanvars = [ +extrabooleanvars = [ 'centerpage', 'noduplex', 'color', 'pretty', 'fullscreen', 'screensaver', 'markings' ] if job = TEX.new(logger) then - job.setextrastringvars(@@extrastringvars) - job.setextrabooleanvars(@@extrabooleanvars) + job.setextrastringvars(extrastringvars) + job.setextrabooleanvars(extrabooleanvars) job.booleanvars.each do |k| commandline.registerflag(k) 
diff --git a/scripts/context/stubs/mswin/mtxrun.lua b/scripts/context/stubs/mswin/mtxrun.lua index 3372831b3..8679aefb1 100644 --- a/scripts/context/stubs/mswin/mtxrun.lua +++ b/scripts/context/stubs/mswin/mtxrun.lua @@ -56,7 +56,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-lua"] = package.loaded["l-lua"] or true --- original size: 3247, stripped down to: 1763 +-- original size: 3409, stripped down to: 1763 if not modules then modules={} end modules ['l-lua']={ version=1.001, @@ -1187,7 +1187,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-table"] = package.loaded["l-table"] or true --- original size: 31142, stripped down to: 20283 +-- original size: 31860, stripped down to: 20846 if not modules then modules={} end modules ['l-table']={ version=1.001, @@ -1265,6 +1265,36 @@ local function sortedkeys(tab) return {} end end +local function sortedhashonly(tab) + if tab then + local srt,s={},0 + for key,_ in next,tab do + if type(key)=="string" then + s=s+1 + srt[s]=key + end + end + sort(srt) + return srt + else + return {} + end +end +local function sortedindexonly(tab) + if tab then + local srt,s={},0 + for key,_ in next,tab do + if type(key)=="number" then + s=s+1 + srt[s]=key + end + end + sort(srt) + return srt + else + return {} + end +end local function sortedhashkeys(tab,cmp) if tab then local srt,s={},0 @@ -1290,6 +1320,8 @@ function table.allkeys(t) return sortedkeys(keys) end table.sortedkeys=sortedkeys +table.sortedhashonly=sortedhashonly +table.sortedindexonly=sortedindexonly table.sortedhashkeys=sortedhashkeys local function nothing() end local function sortedhash(t,cmp) @@ -2078,7 +2110,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-io"] = package.loaded["l-io"] or true --- original size: 8817, stripped down to: 6340 +-- original size: 8824, stripped down to: 6347 if not modules then modules={} end modules ['l-io']={ version=1.001, @@ -2092,7 +2124,7 @@ local 
byte,find,gsub,format=string.byte,string.find,string.gsub,string.format local concat=table.concat local floor=math.floor local type=type -if string.find(os.getenv("PATH"),";") then +if string.find(os.getenv("PATH"),";",1,true) then io.fileseparator,io.pathseparator="\\",";" else io.fileseparator,io.pathseparator="/",":" @@ -2613,7 +2645,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-os"] = package.loaded["l-os"] or true --- original size: 16023, stripped down to: 9634 +-- original size: 16093, stripped down to: 9704 if not modules then modules={} end modules ['l-os']={ version=1.001, @@ -2703,7 +2735,7 @@ function os.resultof(command) end end if not io.fileseparator then - if find(os.getenv("PATH"),";") then + if find(os.getenv("PATH"),";",1,true) then io.fileseparator,io.pathseparator,os.type="\\",";",os.type or "mswin" else io.fileseparator,io.pathseparator,os.type="/",":",os.type or "unix" @@ -2763,7 +2795,7 @@ if platform~="" then elseif os.type=="windows" then function resolvers.platform(t,k) local platform,architecture="",os.getenv("PROCESSOR_ARCHITECTURE") or "" - if find(architecture,"AMD64") then + if find(architecture,"AMD64",1,true) then platform="win64" else platform="mswin" @@ -2775,9 +2807,9 @@ elseif os.type=="windows" then elseif name=="linux" then function resolvers.platform(t,k) local platform,architecture="",os.getenv("HOSTTYPE") or os.resultof("uname -m") or "" - if find(architecture,"x86_64") then + if find(architecture,"x86_64",1,true) then platform="linux-64" - elseif find(architecture,"ppc") then + elseif find(architecture,"ppc",1,true) then platform="linux-ppc" else platform="linux" @@ -2791,9 +2823,9 @@ elseif name=="macosx" then local platform,architecture="",os.resultof("echo $HOSTTYPE") or "" if architecture=="" then platform="osx-intel" - elseif find(architecture,"i386") then + elseif find(architecture,"i386",1,true) then platform="osx-intel" - elseif find(architecture,"x86_64") then + elseif 
find(architecture,"x86_64",1,true) then platform="osx-64" else platform="osx-ppc" @@ -2805,7 +2837,7 @@ elseif name=="macosx" then elseif name=="sunos" then function resolvers.platform(t,k) local platform,architecture="",os.resultof("uname -m") or "" - if find(architecture,"sparc") then + if find(architecture,"sparc",1,true) then platform="solaris-sparc" else platform="solaris-intel" @@ -2817,7 +2849,7 @@ elseif name=="sunos" then elseif name=="freebsd" then function resolvers.platform(t,k) local platform,architecture="",os.resultof("uname -m") or "" - if find(architecture,"amd64") then + if find(architecture,"amd64",1,true) then platform="freebsd-amd64" else platform="freebsd" @@ -2829,7 +2861,7 @@ elseif name=="freebsd" then elseif name=="kfreebsd" then function resolvers.platform(t,k) local platform,architecture="",os.getenv("HOSTTYPE") or os.resultof("uname -m") or "" - if find(architecture,"x86_64") then + if find(architecture,"x86_64",1,true) then platform="kfreebsd-amd64" else platform="kfreebsd-i386" @@ -2847,7 +2879,7 @@ else end end function resolvers.bits(t,k) - local bits=find(os.platform,"64") and 64 or 32 + local bits=find(os.platform,"64",1,true) and 64 or 32 os.bits=bits return bits end @@ -3735,7 +3767,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-dir"] = package.loaded["l-dir"] or true --- original size: 14768, stripped down to: 9107 +-- original size: 14788, stripped down to: 9096 if not modules then modules={} end modules ['l-dir']={ version=1.001, @@ -3759,7 +3791,7 @@ local isfile=lfs.isfile local currentdir=lfs.currentdir local chdir=lfs.chdir local mkdir=lfs.mkdir -local onwindows=os.type=="windows" or find(os.getenv("PATH"),";") +local onwindows=os.type=="windows" or find(os.getenv("PATH"),";",1,true) if not isdir then function isdir(name) local a=attributes(name) @@ -3861,7 +3893,7 @@ local function glob(str,t) local split=lpegmatch(pattern,str) if split then local root,path,base=split[1],split[2],split[3] - 
local recurse=find(base,"%*%*") + local recurse=find(base,"**",1,true) local start=root..path local result=lpegmatch(filter,start..base) globpattern(start,result,recurse,t) @@ -3887,7 +3919,7 @@ local function glob(str,t) local t=t or {} local action=action or function(name) t[#t+1]=name end local root,path,base=split[1],split[2],split[3] - local recurse=find(base,"%*%*") + local recurse=find(base,"**",1,true) local start=root..path local result=lpegmatch(filter,start..base) globpattern(start,result,recurse,action) @@ -3942,7 +3974,6 @@ if onwindows then str="" for i=1,n do local s=select(i,...) - local s=select(i,...) if s=="" then elseif str=="" then str=s @@ -4195,7 +4226,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-unicode"] = package.loaded["l-unicode"] or true --- original size: 33473, stripped down to: 14938 +-- original size: 33706, stripped down to: 14938 if not modules then modules={} end modules ['l-unicode']={ version=1.001, @@ -4840,7 +4871,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-str"] = package.loaded["util-str"] or true --- original size: 29502, stripped down to: 16632 +-- original size: 32843, stripped down to: 18226 if not modules then modules={} end modules ['util-str']={ version=1.001, @@ -4876,9 +4907,11 @@ end if not number then number={} end local stripper=patterns.stripzeros local function points(n) + n=tonumber(n) return (not n or n==0) and "0pt" or lpegmatch(stripper,format("%.5fpt",n/65536)) end local function basepoints(n) + n=tonumber(n) return (not n or n==0) and "0bp" or lpegmatch(stripper,format("%.5fbp",n*(7200/7227)/65536)) end number.points=points @@ -4941,11 +4974,39 @@ local pattern=Carg(1)/function(t) function strings.tabtospace(str,tab) return lpegmatch(pattern,str,1,tab or 7) end -function strings.striplong(str) - str=gsub(str,"^%s*","") - str=gsub(str,"[\n\r]+ *","\n") - return str +local newline=patterns.newline +local endofstring=patterns.endofstring +local 
whitespace=patterns.whitespace +local spacer=patterns.spacer +local space=spacer^0 +local nospace=space/"" +local endofline=nospace*newline +local stripend=(whitespace^1*endofstring)/"" +local normalline=(nospace*((1-space*(newline+endofstring))^1)*nospace) +local stripempty=endofline^1/"" +local normalempty=endofline^1 +local singleempty=endofline*(endofline^0/"") +local doubleempty=endofline*endofline^-1*(endofline^0/"") +local stripstart=stripempty^0 +local p_prune_normal=Cs (stripstart*(stripend+normalline+normalempty )^0 ) +local p_prune_collapse=Cs (stripstart*(stripend+normalline+doubleempty )^0 ) +local p_prune_noempty=Cs (stripstart*(stripend+normalline+singleempty )^0 ) +local p_retain_normal=Cs ((normalline+normalempty )^0 ) +local p_retain_collapse=Cs ((normalline+doubleempty )^0 ) +local p_retain_noempty=Cs ((normalline+singleempty )^0 ) +local striplinepatterns={ + ["prune"]=p_prune_normal, + ["prune and collapse"]=p_prune_collapse, + ["prune and no empty"]=p_prune_noempty, + ["retain"]=p_retain_normal, + ["retain and collapse"]=p_retain_collapse, + ["retain and no empty"]=p_retain_noempty, +} +strings.striplinepatterns=striplinepatterns +function strings.striplines(str,how) + return str and lpegmatch(how and striplinepatterns[how] or p_prune_collapse,str) or str end +strings.striplong=strings.striplines function strings.nice(str) str=gsub(str,"[:%-+_]+"," ") return str @@ -5111,7 +5172,7 @@ local format_i=function(f) if f and f~="" then return format("format('%%%si',a%s)",f,n) else - return format("format('%%i',a%s)",n) + return format("format('%%i',a%s)",n) end end local format_d=format_i @@ -5123,6 +5184,10 @@ local format_f=function(f) n=n+1 return format("format('%%%sf',a%s)",f,n) end +local format_F=function(f) + n=n+1 + return format("((a%s == 0 and '0') or (a%s == 1 and '1') or format('%%%sf',a%s))",n,n,f,n) +end local format_g=function(f) n=n+1 return format("format('%%%sg',a%s)",f,n) @@ -5337,7 +5402,7 @@ local builder=Cs { "start", ( 
P("%")/""*( V("!") -+V("s")+V("q")+V("i")+V("d")+V("f")+V("g")+V("G")+V("e")+V("E")+V("x")+V("X")+V("o") ++V("s")+V("q")+V("i")+V("d")+V("f")+V("F")+V("g")+V("G")+V("e")+V("E")+V("x")+V("X")+V("o") +V("c")+V("C")+V("S") +V("Q") +V("N") @@ -5357,6 +5422,7 @@ local builder=Cs { "start", ["i"]=(prefix_any*P("i"))/format_i, ["d"]=(prefix_any*P("d"))/format_d, ["f"]=(prefix_any*P("f"))/format_f, + ["F"]=(prefix_any*P("F"))/format_F, ["g"]=(prefix_any*P("g"))/format_g, ["G"]=(prefix_any*P("G"))/format_G, ["e"]=(prefix_any*P("e"))/format_e, @@ -5404,7 +5470,7 @@ local function make(t,str) f=loadstripped(p)() else n=0 - p=lpegmatch(builder,str,1,"..",t._extensions_) + p=lpegmatch(builder,str,1,t._connector_,t._extensions_) if n>0 then p=format(template,preamble,t._preamble_,arguments[n],p) f=loadstripped(p,t._environment_)() @@ -5420,18 +5486,18 @@ local function use(t,fmt,...) end strings.formatters={} if _LUAVERSION<5.2 then - function strings.formatters.new() - local t={ _extensions_={},_preamble_=preamble,_environment_={},_type_="formatter" } + function strings.formatters.new(noconcat) + local t={ _type_="formatter",_connector_=noconcat and "," or "..",_extensions_={},_preamble_=preamble,_environment_={} } setmetatable(t,{ __index=make,__call=use }) return t end else - function strings.formatters.new() + function strings.formatters.new(noconcat) local e={} for k,v in next,environment do e[k]=v end - local t={ _extensions_={},_preamble_="",_environment_=e,_type_="formatter" } + local t={ _type_="formatter",_connector_=noconcat and "," or "..",_extensions_={},_preamble_="",_environment_=e } setmetatable(t,{ __index=make,__call=use }) return t end @@ -5473,7 +5539,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-tab"] = package.loaded["util-tab"] or true --- original size: 23980, stripped down to: 16119 +-- original size: 23985, stripped down to: 16069 if not modules then modules={} end modules ['util-tab']={ version=1.001, @@ -5494,27 +5560,29 
@@ local sortedkeys,sortedpairs=table.sortedkeys,table.sortedpairs local formatters=string.formatters local utftoeight=utf.toeight local splitter=lpeg.tsplitat(".") -function tables.definetable(target,nofirst,nolast) - local composed,shortcut,t=nil,nil,{} +function utilities.tables.definetable(target,nofirst,nolast) + local composed,t=nil,{} local snippets=lpegmatch(splitter,target) for i=1,#snippets-(nolast and 1 or 0) do local name=snippets[i] if composed then - composed=shortcut.."."..name - shortcut=shortcut.."_"..name - t[#t+1]=formatters["local %s = %s if not %s then %s = { } %s = %s end"](shortcut,composed,shortcut,shortcut,composed,shortcut) + composed=composed.."."..name + t[#t+1]=formatters["if not %s then %s = { } end"](composed,composed) else composed=name - shortcut=name if not nofirst then t[#t+1]=formatters["%s = %s or { }"](composed,composed) end end end - if nolast then - composed=shortcut.."."..snippets[#snippets] + if composed then + if nolast then + composed=composed.."."..snippets[#snippets] + end + return concat(t,"\n"),composed + else + return "",target end - return concat(t,"\n"),composed end function tables.definedtable(...) 
local t=_G @@ -5541,7 +5609,7 @@ function tables.accesstable(target,root) end function tables.migratetable(target,v,root) local t=root or _G - local names=string.split(target,".") + local names=lpegmatch(splitter,target) for i=1,#names-1 do local name=names[i] t[name]=t[name] or {} @@ -6230,7 +6298,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-prs"] = package.loaded["util-prs"] or true --- original size: 19604, stripped down to: 13998 +-- original size: 19618, stripped down to: 14012 if not modules then modules={} end modules ['util-prs']={ version=1.001, @@ -6375,12 +6443,12 @@ function parsers.settings_to_array(str,strict) elseif not str or str=="" then return {} elseif strict then - if find(str,"{") then + if find(str,"{",1,true) then return lpegmatch(pattern,str) else return { str } end - elseif find(str,",") then + elseif find(str,",",1,true) then return lpegmatch(pattern,str) else return { str } @@ -7112,7 +7180,7 @@ do -- create closure to overcome 200 locals limit package.loaded["trac-log"] = package.loaded["trac-log"] or true --- original size: 25391, stripped down to: 16561 +-- original size: 25607, stripped down to: 16617 if not modules then modules={} end modules ['trac-log']={ version=1.001, @@ -7466,9 +7534,10 @@ local function setblocked(category,value) v.state=value end else - states=utilities.parsers.settings_to_hash(category) + states=utilities.parsers.settings_to_hash(category,type(states)=="table" and states or nil) for c,_ in next,states do - if data[c] then + local v=data[c] + if v then v.state=value else c=topattern(c,true,true) @@ -7747,7 +7816,7 @@ do -- create closure to overcome 200 locals limit package.loaded["trac-inf"] = package.loaded["trac-inf"] or true --- original size: 6643, stripped down to: 5272 +-- original size: 7011, stripped down to: 5590 if not modules then modules={} end modules ['trac-inf']={ version=1.001, @@ -7757,7 +7826,7 @@ if not modules then modules={} end modules ['trac-inf']={ 
license="see context related readme files" } local type,tonumber,select=type,tonumber,select -local format,lower=string.format,string.lower +local format,lower,find=string.format,string.lower,string.find local concat=table.concat local clock=os.gettimeofday or os.clock local setmetatableindex=table.setmetatableindex @@ -7848,10 +7917,8 @@ function statistics.show() if statistics.enable then local register=statistics.register register("used platform",function() - local mask=lua.mask or "ascii" - return format("%s, type: %s, binary subtree: %s, symbol mask: %s (%s)", - os.platform or "unknown",os.type or "unknown",environment.texos or "unknown", - mask,mask=="utf" and "τεχ" or "tex") + return format("%s, type: %s, binary subtree: %s", + os.platform or "unknown",os.type or "unknown",environment.texos or "unknown") end) register("luatex banner",function() return lower(status.banner) @@ -7864,14 +7931,23 @@ function statistics.show() return format("%s direct, %s indirect, %s total",total-indirect,indirect,total) end) if jit then - local status={ jit.status() } - if status[1] then - register("luajit status",function() - return concat(status," ",2) - end) + local jitstatus={ jit.status() } + if jitstatus[1] then + register("luajit options",concat(jitstatus," ",2)) end end - register("current memory usage",statistics.memused) + register("lua properties",function() + local list=status.list() + local hashchar=tonumber(list.luatex_hashchars) + local mask=lua.mask or "ascii" + return format("engine: %s, used memory: %s, hash type: %s, hash chars: min(%s,40), symbol mask: %s (%s)", + jit and "luajit" or "lua", + statistics.memused(), + list.luatex_hashtype or "default", + hashchar and 2^hashchar or "unknown", + mask, + mask=="utf" and "τεχ" or "tex") + end) register("runtime",statistics.runtime) logs.newline() for i=1,#statusinfo do @@ -8616,7 +8692,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-env"] = package.loaded["util-env"] or true --- 
original size: 8807, stripped down to: 5085 +-- original size: 8814, stripped down to: 5092 if not modules then modules={} end modules ['util-env']={ version=1.001, @@ -8753,7 +8829,7 @@ function environment.reconstructcommandline(arg,noquote) a=resolvers.resolve(a) a=unquoted(a) a=gsub(a,'"','\\"') - if find(a," ") then + if find(a," ",1,true) then result[#result+1]=quoted(a) else result[#result+1]=a @@ -8813,7 +8889,7 @@ do -- create closure to overcome 200 locals limit package.loaded["luat-env"] = package.loaded["luat-env"] or true --- original size: 5930, stripped down to: 4235 +-- original size: 6174, stripped down to: 4141 if not modules then modules={} end modules ['luat-env']={ version=1.001, @@ -8891,15 +8967,13 @@ function environment.luafilechunk(filename,silent) filename=file.replacesuffix(filename,"lua") local fullname=environment.luafile(filename) if fullname and fullname~="" then - local data=luautilities.loadedluacode(fullname,strippable,filename) - if trace_locating then + local data=luautilities.loadedluacode(fullname,strippable,filename) + if not silent then report_lua("loading file %a %s",fullname,not data and "failed" or "succeeded") - elseif not silent then - texio.write("<",data and "+ " or "- ",fullname,">") end return data else - if trace_locating then + if not silent then report_lua("unknown file %a",filename) end return nil @@ -9955,7 +10029,7 @@ do -- create closure to overcome 200 locals limit package.loaded["lxml-lpt"] = package.loaded["lxml-lpt"] or true --- original size: 48956, stripped down to: 30516 +-- original size: 48030, stripped down to: 30595 if not modules then modules={} end modules ['lxml-lpt']={ version=1.001, @@ -10936,8 +11010,13 @@ function xml.elements(root,pattern,reverse) local collected=applylpath(root,pattern) if not collected then return dummy - elseif reverse then - local c=#collected+1 + end + local n=#collected + if n==0 then + return dummy + end + if reverse then + local c=n+1 return function() if c>1 then 
c=c-1 @@ -10947,7 +11026,7 @@ function xml.elements(root,pattern,reverse) end end else - local n,c=#collected,0 + local c=0 return function() if c<n then c=c+1 @@ -10962,8 +11041,13 @@ function xml.collected(root,pattern,reverse) local collected=applylpath(root,pattern) if not collected then return dummy - elseif reverse then - local c=#collected+1 + end + local n=#collected + if n==0 then + return dummy + end + if reverse then + local c=n+1 return function() if c>1 then c=c-1 @@ -10971,7 +11055,7 @@ function xml.collected(root,pattern,reverse) end end else - local n,c=#collected,0 + local c=0 return function() if c<n then c=c+1 @@ -10986,7 +11070,7 @@ function xml.inspect(collection,pattern) report_lpath("pattern: %s\n\n%s\n",pattern,xml.tostring(e)) end end -local function split(e) +local function split(e) local dt=e.dt if dt then for i=1,#dt do @@ -12326,7 +12410,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-ini"] = package.loaded["data-ini"] or true --- original size: 7898, stripped down to: 5501 +-- original size: 7927, stripped down to: 5528 if not modules then modules={} end modules ['data-ini']={ version=1.001, @@ -12470,7 +12554,7 @@ if not texroot or texroot=="" then ossetenv('TEXROOT',texroot) end environment.texroot=file.collapsepath(texroot) -if profiler then +if type(profiler)=="table" and not jit then directives.register("system.profile",function() profiler.start("luatex-profile.log") end) @@ -12488,7 +12572,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-exp"] = package.loaded["data-exp"] or true --- original size: 15303, stripped down to: 9716 +-- original size: 15317, stripped down to: 9723 if not modules then modules={} end modules ['data-exp']={ version=1.001, @@ -12610,7 +12694,7 @@ function resolvers.cleanpath(str) report_expansions("no home dir set, ignoring dependent paths") end function resolvers.cleanpath(str) - if not str or find(str,"~") then + if not str or find(str,"~",1,true) 
then return "" else return lpegmatch(cleanup,str) @@ -13488,7 +13572,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-met"] = package.loaded["data-met"] or true --- original size: 5453, stripped down to: 4007 +-- original size: 5460, stripped down to: 4014 if not modules then modules={} end modules ['data-met']={ version=1.100, @@ -13517,7 +13601,7 @@ local function splitmethod(filename) return filename end filename=file.collapsepath(filename,".") - if not find(filename,"://") then + if not find(filename,"://",1,true) then return { scheme="file",path=filename,original=filename,filename=filename } end local specification=url.hashed(filename) @@ -13607,7 +13691,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-res"] = package.loaded["data-res"] or true --- original size: 61799, stripped down to: 42957 +-- original size: 61824, stripped down to: 42982 if not modules then modules={} end modules ['data-res']={ version=1.001, @@ -13838,7 +13922,7 @@ local function identify_configuration_files() local realname=resolvers.resolve(filename) if trace_locating then local fullpath=gsub(resolvers.resolve(collapsepath(filepath)),"//","/") - local weirdpath=find(fullpath,"/texmf.+/texmf") or not find(fullpath,"/web2c") + local weirdpath=find(fullpath,"/texmf.+/texmf") or not find(fullpath,"/web2c",1,true) report_resolving("looking for %a on %s path %a from specification %a",luacnfname,weirdpath and "weird" or "given",fullpath,filepath) end if lfs.isfile(realname) then @@ -14427,7 +14511,7 @@ local function find_direct(filename,allresults) end end local function find_wildcard(filename,allresults) - if find(filename,'%*') then + if find(filename,'*',1,true) then if trace_locating then report_resolving("checking wildcard %a",filename) end @@ -14573,7 +14657,7 @@ local function find_intree(filename,filetype,wantedfiles,allresults) local scheme=url.hasscheme(pathname) if not scheme or scheme=="file" then local 
pname=gsub(pathname,"%.%*$",'') - if not find(pname,"%*") then + if not find(pname,"*",1,true) then if can_be_dir(pname) then for k=1,#wantedfiles do local w=wantedfiles[k] @@ -14842,7 +14926,7 @@ local function findwildcardfiles(filename,allresults,result) local path=lower(lpegmatch(makewildcard,dirn) or dirn) local name=lower(lpegmatch(makewildcard,base) or base) local files,done=instance.files,false - if find(name,"%*") then + if find(name,"*",1,true) then local hashes=instance.hashes for k=1,#hashes do local hash=hashes[k] @@ -15885,7 +15969,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-sch"] = package.loaded["data-sch"] or true --- original size: 6202, stripped down to: 5149 +-- original size: 6213, stripped down to: 5160 if not modules then modules={} end modules ['data-sch']={ version=1.001, @@ -15928,7 +16012,7 @@ function resolvers.schemes.cleanname(specification) end local cached,loaded,reused,thresholds,handlers={},{},{},{},{} local function runcurl(name,cachename) - local command="curl --silent --create-dirs --output "..cachename.." "..name + local command="curl --silent --insecure --create-dirs --output "..cachename.." 
"..name os.spawn(command) end local function fetch(specification) @@ -16791,8 +16875,8 @@ end -- of closure -- used libraries : l-lua.lua l-package.lua l-lpeg.lua l-function.lua l-string.lua l-table.lua l-io.lua l-number.lua l-set.lua l-os.lua l-file.lua l-gzip.lua l-md5.lua l-url.lua l-dir.lua l-boolean.lua l-unicode.lua l-math.lua util-str.lua util-tab.lua util-sto.lua util-prs.lua util-fmt.lua trac-set.lua trac-log.lua trac-inf.lua trac-pro.lua util-lua.lua util-deb.lua util-mrg.lua util-tpl.lua util-env.lua luat-env.lua lxml-tab.lua lxml-lpt.lua lxml-mis.lua lxml-aux.lua lxml-xml.lua trac-xml.lua data-ini.lua data-exp.lua data-env.lua data-tmp.lua data-met.lua data-res.lua data-pre.lua data-inp.lua data-out.lua data-fil.lua data-con.lua data-use.lua data-zip.lua data-tre.lua data-sch.lua data-lua.lua data-aux.lua data-tmf.lua data-lst.lua util-lib.lua luat-sta.lua luat-fmt.lua -- skipped libraries : - --- original bytes : 689993 --- stripped bytes : 244562 +-- original bytes : 694558 +-- stripped bytes : 246497 -- end library merge diff --git a/scripts/context/stubs/unix/mtxrun b/scripts/context/stubs/unix/mtxrun index 3372831b3..8679aefb1 100644 --- a/scripts/context/stubs/unix/mtxrun +++ b/scripts/context/stubs/unix/mtxrun @@ -56,7 +56,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-lua"] = package.loaded["l-lua"] or true --- original size: 3247, stripped down to: 1763 +-- original size: 3409, stripped down to: 1763 if not modules then modules={} end modules ['l-lua']={ version=1.001, @@ -1187,7 +1187,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-table"] = package.loaded["l-table"] or true --- original size: 31142, stripped down to: 20283 +-- original size: 31860, stripped down to: 20846 if not modules then modules={} end modules ['l-table']={ version=1.001, @@ -1265,6 +1265,36 @@ local function sortedkeys(tab) return {} end end +local function sortedhashonly(tab) + if tab then + local srt,s={},0 + for key,_ 
in next,tab do + if type(key)=="string" then + s=s+1 + srt[s]=key + end + end + sort(srt) + return srt + else + return {} + end +end +local function sortedindexonly(tab) + if tab then + local srt,s={},0 + for key,_ in next,tab do + if type(key)=="number" then + s=s+1 + srt[s]=key + end + end + sort(srt) + return srt + else + return {} + end +end local function sortedhashkeys(tab,cmp) if tab then local srt,s={},0 @@ -1290,6 +1320,8 @@ function table.allkeys(t) return sortedkeys(keys) end table.sortedkeys=sortedkeys +table.sortedhashonly=sortedhashonly +table.sortedindexonly=sortedindexonly table.sortedhashkeys=sortedhashkeys local function nothing() end local function sortedhash(t,cmp) @@ -2078,7 +2110,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-io"] = package.loaded["l-io"] or true --- original size: 8817, stripped down to: 6340 +-- original size: 8824, stripped down to: 6347 if not modules then modules={} end modules ['l-io']={ version=1.001, @@ -2092,7 +2124,7 @@ local byte,find,gsub,format=string.byte,string.find,string.gsub,string.format local concat=table.concat local floor=math.floor local type=type -if string.find(os.getenv("PATH"),";") then +if string.find(os.getenv("PATH"),";",1,true) then io.fileseparator,io.pathseparator="\\",";" else io.fileseparator,io.pathseparator="/",":" @@ -2613,7 +2645,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-os"] = package.loaded["l-os"] or true --- original size: 16023, stripped down to: 9634 +-- original size: 16093, stripped down to: 9704 if not modules then modules={} end modules ['l-os']={ version=1.001, @@ -2703,7 +2735,7 @@ function os.resultof(command) end end if not io.fileseparator then - if find(os.getenv("PATH"),";") then + if find(os.getenv("PATH"),";",1,true) then io.fileseparator,io.pathseparator,os.type="\\",";",os.type or "mswin" else io.fileseparator,io.pathseparator,os.type="/",":",os.type or "unix" @@ -2763,7 +2795,7 @@ if platform~="" then elseif 
os.type=="windows" then function resolvers.platform(t,k) local platform,architecture="",os.getenv("PROCESSOR_ARCHITECTURE") or "" - if find(architecture,"AMD64") then + if find(architecture,"AMD64",1,true) then platform="win64" else platform="mswin" @@ -2775,9 +2807,9 @@ elseif os.type=="windows" then elseif name=="linux" then function resolvers.platform(t,k) local platform,architecture="",os.getenv("HOSTTYPE") or os.resultof("uname -m") or "" - if find(architecture,"x86_64") then + if find(architecture,"x86_64",1,true) then platform="linux-64" - elseif find(architecture,"ppc") then + elseif find(architecture,"ppc",1,true) then platform="linux-ppc" else platform="linux" @@ -2791,9 +2823,9 @@ elseif name=="macosx" then local platform,architecture="",os.resultof("echo $HOSTTYPE") or "" if architecture=="" then platform="osx-intel" - elseif find(architecture,"i386") then + elseif find(architecture,"i386",1,true) then platform="osx-intel" - elseif find(architecture,"x86_64") then + elseif find(architecture,"x86_64",1,true) then platform="osx-64" else platform="osx-ppc" @@ -2805,7 +2837,7 @@ elseif name=="macosx" then elseif name=="sunos" then function resolvers.platform(t,k) local platform,architecture="",os.resultof("uname -m") or "" - if find(architecture,"sparc") then + if find(architecture,"sparc",1,true) then platform="solaris-sparc" else platform="solaris-intel" @@ -2817,7 +2849,7 @@ elseif name=="sunos" then elseif name=="freebsd" then function resolvers.platform(t,k) local platform,architecture="",os.resultof("uname -m") or "" - if find(architecture,"amd64") then + if find(architecture,"amd64",1,true) then platform="freebsd-amd64" else platform="freebsd" @@ -2829,7 +2861,7 @@ elseif name=="freebsd" then elseif name=="kfreebsd" then function resolvers.platform(t,k) local platform,architecture="",os.getenv("HOSTTYPE") or os.resultof("uname -m") or "" - if find(architecture,"x86_64") then + if find(architecture,"x86_64",1,true) then platform="kfreebsd-amd64" else 
platform="kfreebsd-i386" @@ -2847,7 +2879,7 @@ else end end function resolvers.bits(t,k) - local bits=find(os.platform,"64") and 64 or 32 + local bits=find(os.platform,"64",1,true) and 64 or 32 os.bits=bits return bits end @@ -3735,7 +3767,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-dir"] = package.loaded["l-dir"] or true --- original size: 14768, stripped down to: 9107 +-- original size: 14788, stripped down to: 9096 if not modules then modules={} end modules ['l-dir']={ version=1.001, @@ -3759,7 +3791,7 @@ local isfile=lfs.isfile local currentdir=lfs.currentdir local chdir=lfs.chdir local mkdir=lfs.mkdir -local onwindows=os.type=="windows" or find(os.getenv("PATH"),";") +local onwindows=os.type=="windows" or find(os.getenv("PATH"),";",1,true) if not isdir then function isdir(name) local a=attributes(name) @@ -3861,7 +3893,7 @@ local function glob(str,t) local split=lpegmatch(pattern,str) if split then local root,path,base=split[1],split[2],split[3] - local recurse=find(base,"%*%*") + local recurse=find(base,"**",1,true) local start=root..path local result=lpegmatch(filter,start..base) globpattern(start,result,recurse,t) @@ -3887,7 +3919,7 @@ local function glob(str,t) local t=t or {} local action=action or function(name) t[#t+1]=name end local root,path,base=split[1],split[2],split[3] - local recurse=find(base,"%*%*") + local recurse=find(base,"**",1,true) local start=root..path local result=lpegmatch(filter,start..base) globpattern(start,result,recurse,action) @@ -3942,7 +3974,6 @@ if onwindows then str="" for i=1,n do local s=select(i,...) - local s=select(i,...) 
if s=="" then elseif str=="" then str=s @@ -4195,7 +4226,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-unicode"] = package.loaded["l-unicode"] or true --- original size: 33473, stripped down to: 14938 +-- original size: 33706, stripped down to: 14938 if not modules then modules={} end modules ['l-unicode']={ version=1.001, @@ -4840,7 +4871,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-str"] = package.loaded["util-str"] or true --- original size: 29502, stripped down to: 16632 +-- original size: 32843, stripped down to: 18226 if not modules then modules={} end modules ['util-str']={ version=1.001, @@ -4876,9 +4907,11 @@ end if not number then number={} end local stripper=patterns.stripzeros local function points(n) + n=tonumber(n) return (not n or n==0) and "0pt" or lpegmatch(stripper,format("%.5fpt",n/65536)) end local function basepoints(n) + n=tonumber(n) return (not n or n==0) and "0bp" or lpegmatch(stripper,format("%.5fbp",n*(7200/7227)/65536)) end number.points=points @@ -4941,11 +4974,39 @@ local pattern=Carg(1)/function(t) function strings.tabtospace(str,tab) return lpegmatch(pattern,str,1,tab or 7) end -function strings.striplong(str) - str=gsub(str,"^%s*","") - str=gsub(str,"[\n\r]+ *","\n") - return str +local newline=patterns.newline +local endofstring=patterns.endofstring +local whitespace=patterns.whitespace +local spacer=patterns.spacer +local space=spacer^0 +local nospace=space/"" +local endofline=nospace*newline +local stripend=(whitespace^1*endofstring)/"" +local normalline=(nospace*((1-space*(newline+endofstring))^1)*nospace) +local stripempty=endofline^1/"" +local normalempty=endofline^1 +local singleempty=endofline*(endofline^0/"") +local doubleempty=endofline*endofline^-1*(endofline^0/"") +local stripstart=stripempty^0 +local p_prune_normal=Cs (stripstart*(stripend+normalline+normalempty )^0 ) +local p_prune_collapse=Cs (stripstart*(stripend+normalline+doubleempty )^0 ) +local 
p_prune_noempty=Cs (stripstart*(stripend+normalline+singleempty )^0 ) +local p_retain_normal=Cs ((normalline+normalempty )^0 ) +local p_retain_collapse=Cs ((normalline+doubleempty )^0 ) +local p_retain_noempty=Cs ((normalline+singleempty )^0 ) +local striplinepatterns={ + ["prune"]=p_prune_normal, + ["prune and collapse"]=p_prune_collapse, + ["prune and no empty"]=p_prune_noempty, + ["retain"]=p_retain_normal, + ["retain and collapse"]=p_retain_collapse, + ["retain and no empty"]=p_retain_noempty, +} +strings.striplinepatterns=striplinepatterns +function strings.striplines(str,how) + return str and lpegmatch(how and striplinepatterns[how] or p_prune_collapse,str) or str end +strings.striplong=strings.striplines function strings.nice(str) str=gsub(str,"[:%-+_]+"," ") return str @@ -5111,7 +5172,7 @@ local format_i=function(f) if f and f~="" then return format("format('%%%si',a%s)",f,n) else - return format("format('%%i',a%s)",n) + return format("format('%%i',a%s)",n) end end local format_d=format_i @@ -5123,6 +5184,10 @@ local format_f=function(f) n=n+1 return format("format('%%%sf',a%s)",f,n) end +local format_F=function(f) + n=n+1 + return format("((a%s == 0 and '0') or (a%s == 1 and '1') or format('%%%sf',a%s))",n,n,f,n) +end local format_g=function(f) n=n+1 return format("format('%%%sg',a%s)",f,n) @@ -5337,7 +5402,7 @@ local builder=Cs { "start", ( P("%")/""*( V("!") -+V("s")+V("q")+V("i")+V("d")+V("f")+V("g")+V("G")+V("e")+V("E")+V("x")+V("X")+V("o") ++V("s")+V("q")+V("i")+V("d")+V("f")+V("F")+V("g")+V("G")+V("e")+V("E")+V("x")+V("X")+V("o") +V("c")+V("C")+V("S") +V("Q") +V("N") @@ -5357,6 +5422,7 @@ local builder=Cs { "start", ["i"]=(prefix_any*P("i"))/format_i, ["d"]=(prefix_any*P("d"))/format_d, ["f"]=(prefix_any*P("f"))/format_f, + ["F"]=(prefix_any*P("F"))/format_F, ["g"]=(prefix_any*P("g"))/format_g, ["G"]=(prefix_any*P("G"))/format_G, ["e"]=(prefix_any*P("e"))/format_e, @@ -5404,7 +5470,7 @@ local function make(t,str) f=loadstripped(p)() else n=0 - 
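The `strings.striplong` replacement above defines six LPEG variants keyed by strings like `"prune and collapse"`. A hedged Python sketch of what the default pattern (`p_prune_collapse`) appears to do, inferred from the pattern definitions rather than ported literally: trim spaces around each line, drop leading and trailing blank lines, and collapse runs of blank lines to a single one:

```python
def striplines(s):
    # Approximation of strings.striplines(str) with the default
    # "prune and collapse" pattern (inferred, not a literal port).
    lines = [line.strip() for line in s.splitlines()]
    while lines and lines[0] == "":
        lines.pop(0)                      # stripstart: leading empty lines
    while lines and lines[-1] == "":
        lines.pop()                       # stripend: trailing whitespace
    out = []
    for line in lines:
        if line == "" and out and out[-1] == "":
            continue                      # doubleempty: collapse blank runs
        out.append(line)
    return "\n".join(out)
```

The other keys swap in the non-pruning (`retain`) or blank-dropping (`no empty`) variants of the same pipeline.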
p=lpegmatch(builder,str,1,"..",t._extensions_) + p=lpegmatch(builder,str,1,t._connector_,t._extensions_) if n>0 then p=format(template,preamble,t._preamble_,arguments[n],p) f=loadstripped(p,t._environment_)() @@ -5420,18 +5486,18 @@ local function use(t,fmt,...) end strings.formatters={} if _LUAVERSION<5.2 then - function strings.formatters.new() - local t={ _extensions_={},_preamble_=preamble,_environment_={},_type_="formatter" } + function strings.formatters.new(noconcat) + local t={ _type_="formatter",_connector_=noconcat and "," or "..",_extensions_={},_preamble_=preamble,_environment_={} } setmetatable(t,{ __index=make,__call=use }) return t end else - function strings.formatters.new() + function strings.formatters.new(noconcat) local e={} for k,v in next,environment do e[k]=v end - local t={ _extensions_={},_preamble_="",_environment_=e,_type_="formatter" } + local t={ _type_="formatter",_connector_=noconcat and "," or "..",_extensions_={},_preamble_="",_environment_=e } setmetatable(t,{ __index=make,__call=use }) return t end @@ -5473,7 +5539,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-tab"] = package.loaded["util-tab"] or true --- original size: 23980, stripped down to: 16119 +-- original size: 23985, stripped down to: 16069 if not modules then modules={} end modules ['util-tab']={ version=1.001, @@ -5494,27 +5560,29 @@ local sortedkeys,sortedpairs=table.sortedkeys,table.sortedpairs local formatters=string.formatters local utftoeight=utf.toeight local splitter=lpeg.tsplitat(".") -function tables.definetable(target,nofirst,nolast) - local composed,shortcut,t=nil,nil,{} +function utilities.tables.definetable(target,nofirst,nolast) + local composed,t=nil,{} local snippets=lpegmatch(splitter,target) for i=1,#snippets-(nolast and 1 or 0) do local name=snippets[i] if composed then - composed=shortcut.."."..name - shortcut=shortcut.."_"..name - t[#t+1]=formatters["local %s = %s if not %s then %s = { } %s = %s 
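Two formatter changes land in this hunk: a new `%F` directive that short-circuits the exact values 0 and 1 to the strings `'0'` and `'1'` before falling back to `%f` (avoiding output like `0.000000` for common cases), and a `_connector_` field threaded through `formatters.new(noconcat)` so a formatter can be built with `,` instead of `..` between generated pieces. A sketch of the `%F` semantics in Python (hypothetical helper name):

```python
def format_F(value, spec="%f"):
    # Mirrors format_F: (a == 0 and '0') or (a == 1 and '1') or format('%f', a)
    if value == 0:
        return "0"
    if value == 1:
        return "1"
    return spec % value
```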
end"](shortcut,composed,shortcut,shortcut,composed,shortcut) + composed=composed.."."..name + t[#t+1]=formatters["if not %s then %s = { } end"](composed,composed) else composed=name - shortcut=name if not nofirst then t[#t+1]=formatters["%s = %s or { }"](composed,composed) end end end - if nolast then - composed=shortcut.."."..snippets[#snippets] + if composed then + if nolast then + composed=composed.."."..snippets[#snippets] + end + return concat(t,"\n"),composed + else + return "",target end - return concat(t,"\n"),composed end function tables.definedtable(...) local t=_G @@ -5541,7 +5609,7 @@ function tables.accesstable(target,root) end function tables.migratetable(target,v,root) local t=root or _G - local names=string.split(target,".") + local names=lpegmatch(splitter,target) for i=1,#names-1 do local name=names[i] t[name]=t[name] or {} @@ -6230,7 +6298,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-prs"] = package.loaded["util-prs"] or true --- original size: 19604, stripped down to: 13998 +-- original size: 19618, stripped down to: 14012 if not modules then modules={} end modules ['util-prs']={ version=1.001, @@ -6375,12 +6443,12 @@ function parsers.settings_to_array(str,strict) elseif not str or str=="" then return {} elseif strict then - if find(str,"{") then + if find(str,"{",1,true) then return lpegmatch(pattern,str) else return { str } end - elseif find(str,",") then + elseif find(str,",",1,true) then return lpegmatch(pattern,str) else return { str } @@ -7112,7 +7180,7 @@ do -- create closure to overcome 200 locals limit package.loaded["trac-log"] = package.loaded["trac-log"] or true --- original size: 25391, stripped down to: 16561 +-- original size: 25607, stripped down to: 16617 if not modules then modules={} end modules ['trac-log']={ version=1.001, @@ -7466,9 +7534,10 @@ local function setblocked(category,value) v.state=value end else - states=utilities.parsers.settings_to_hash(category) + 
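The rewritten `tables.definetable` above drops the `shortcut` local-variable scheme and emits plain `if not a.b then a.b = { } end` statements, and it now handles the degenerate case by returning `"", target` when nothing was composed. A Python sketch that generates the same kind of Lua snippet:

```python
def definetable(target, nofirst=False, nolast=False):
    # Sketch of the patched utilities.tables.definetable code generator.
    parts = target.split(".")
    stmts, composed = [], None
    for name in parts[: len(parts) - (1 if nolast else 0)]:
        if composed is None:
            composed = name
            if not nofirst:
                stmts.append(f"{composed} = {composed} or {{ }}")
        else:
            composed = f"{composed}.{name}"
            stmts.append(f"if not {composed} then {composed} = {{ }} end")
    if composed is None:
        return "", target              # nothing to define
    if nolast:
        composed = f"{composed}.{parts[-1]}"
    return "\n".join(stmts), composed
```

For `"a.b.c"` this yields a guard chain creating `a`, `a.b`, and `a.b.c` in order, plus the final composed name for the caller.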
states=utilities.parsers.settings_to_hash(category,type(states)=="table" and states or nil) for c,_ in next,states do - if data[c] then + local v=data[c] + if v then v.state=value else c=topattern(c,true,true) @@ -7747,7 +7816,7 @@ do -- create closure to overcome 200 locals limit package.loaded["trac-inf"] = package.loaded["trac-inf"] or true --- original size: 6643, stripped down to: 5272 +-- original size: 7011, stripped down to: 5590 if not modules then modules={} end modules ['trac-inf']={ version=1.001, @@ -7757,7 +7826,7 @@ if not modules then modules={} end modules ['trac-inf']={ license="see context related readme files" } local type,tonumber,select=type,tonumber,select -local format,lower=string.format,string.lower +local format,lower,find=string.format,string.lower,string.find local concat=table.concat local clock=os.gettimeofday or os.clock local setmetatableindex=table.setmetatableindex @@ -7848,10 +7917,8 @@ function statistics.show() if statistics.enable then local register=statistics.register register("used platform",function() - local mask=lua.mask or "ascii" - return format("%s, type: %s, binary subtree: %s, symbol mask: %s (%s)", - os.platform or "unknown",os.type or "unknown",environment.texos or "unknown", - mask,mask=="utf" and "τεχ" or "tex") + return format("%s, type: %s, binary subtree: %s", + os.platform or "unknown",os.type or "unknown",environment.texos or "unknown") end) register("luatex banner",function() return lower(status.banner) @@ -7864,14 +7931,23 @@ function statistics.show() return format("%s direct, %s indirect, %s total",total-indirect,indirect,total) end) if jit then - local status={ jit.status() } - if status[1] then - register("luajit status",function() - return concat(status," ",2) - end) + local jitstatus={ jit.status() } + if jitstatus[1] then + register("luajit options",concat(jitstatus," ",2)) end end - register("current memory usage",statistics.memused) + register("lua properties",function() + local 
list=status.list() + local hashchar=tonumber(list.luatex_hashchars) + local mask=lua.mask or "ascii" + return format("engine: %s, used memory: %s, hash type: %s, hash chars: min(%s,40), symbol mask: %s (%s)", + jit and "luajit" or "lua", + statistics.memused(), + list.luatex_hashtype or "default", + hashchar and 2^hashchar or "unknown", + mask, + mask=="utf" and "τεχ" or "tex") + end) register("runtime",statistics.runtime) logs.newline() for i=1,#statusinfo do @@ -8616,7 +8692,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-env"] = package.loaded["util-env"] or true --- original size: 8807, stripped down to: 5085 +-- original size: 8814, stripped down to: 5092 if not modules then modules={} end modules ['util-env']={ version=1.001, @@ -8753,7 +8829,7 @@ function environment.reconstructcommandline(arg,noquote) a=resolvers.resolve(a) a=unquoted(a) a=gsub(a,'"','\\"') - if find(a," ") then + if find(a," ",1,true) then result[#result+1]=quoted(a) else result[#result+1]=a @@ -8813,7 +8889,7 @@ do -- create closure to overcome 200 locals limit package.loaded["luat-env"] = package.loaded["luat-env"] or true --- original size: 5930, stripped down to: 4235 +-- original size: 6174, stripped down to: 4141 if not modules then modules={} end modules ['luat-env']={ version=1.001, @@ -8891,15 +8967,13 @@ function environment.luafilechunk(filename,silent) filename=file.replacesuffix(filename,"lua") local fullname=environment.luafile(filename) if fullname and fullname~="" then - local data=luautilities.loadedluacode(fullname,strippable,filename) - if trace_locating then + local data=luautilities.loadedluacode(fullname,strippable,filename) + if not silent then report_lua("loading file %a %s",fullname,not data and "failed" or "succeeded") - elseif not silent then - texio.write("<",data and "+ " or "- ",fullname,">") end return data else - if trace_locating then + if not silent then report_lua("unknown file %a",filename) end return nil @@ -9955,7 +10029,7 
@@ do -- create closure to overcome 200 locals limit package.loaded["lxml-lpt"] = package.loaded["lxml-lpt"] or true --- original size: 48956, stripped down to: 30516 +-- original size: 48030, stripped down to: 30595 if not modules then modules={} end modules ['lxml-lpt']={ version=1.001, @@ -10936,8 +11010,13 @@ function xml.elements(root,pattern,reverse) local collected=applylpath(root,pattern) if not collected then return dummy - elseif reverse then - local c=#collected+1 + end + local n=#collected + if n==0 then + return dummy + end + if reverse then + local c=n+1 return function() if c>1 then c=c-1 @@ -10947,7 +11026,7 @@ function xml.elements(root,pattern,reverse) end end else - local n,c=#collected,0 + local c=0 return function() if c<n then c=c+1 @@ -10962,8 +11041,13 @@ function xml.collected(root,pattern,reverse) local collected=applylpath(root,pattern) if not collected then return dummy - elseif reverse then - local c=#collected+1 + end + local n=#collected + if n==0 then + return dummy + end + if reverse then + local c=n+1 return function() if c>1 then c=c-1 @@ -10971,7 +11055,7 @@ function xml.collected(root,pattern,reverse) end end else - local n,c=#collected,0 + local c=0 return function() if c<n then c=c+1 @@ -10986,7 +11070,7 @@ function xml.inspect(collection,pattern) report_lpath("pattern: %s\n\n%s\n",pattern,xml.tostring(e)) end end -local function split(e) +local function split(e) local dt=e.dt if dt then for i=1,#dt do @@ -12326,7 +12410,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-ini"] = package.loaded["data-ini"] or true --- original size: 7898, stripped down to: 5501 +-- original size: 7927, stripped down to: 5528 if not modules then modules={} end modules ['data-ini']={ version=1.001, @@ -12470,7 +12554,7 @@ if not texroot or texroot=="" then ossetenv('TEXROOT',texroot) end environment.texroot=file.collapsepath(texroot) -if profiler then +if type(profiler)=="table" and not jit then 
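The `lxml-lpt` hunk above makes `xml.elements` and `xml.collected` return the dummy iterator for an empty match list as well as for a nil one, and hoists `#collected` out of the closures so both branches share it. The shape of the fix, as a Python generator sketch:

```python
def elements(collected, reverse=False):
    # Empty or missing results now take the same early-out path.
    if not collected:                  # nil result or zero matches -> dummy
        return iter(())
    if reverse:
        return iter(reversed(collected))
    return iter(collected)
```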
directives.register("system.profile",function() profiler.start("luatex-profile.log") end) @@ -12488,7 +12572,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-exp"] = package.loaded["data-exp"] or true --- original size: 15303, stripped down to: 9716 +-- original size: 15317, stripped down to: 9723 if not modules then modules={} end modules ['data-exp']={ version=1.001, @@ -12610,7 +12694,7 @@ function resolvers.cleanpath(str) report_expansions("no home dir set, ignoring dependent paths") end function resolvers.cleanpath(str) - if not str or find(str,"~") then + if not str or find(str,"~",1,true) then return "" else return lpegmatch(cleanup,str) @@ -13488,7 +13572,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-met"] = package.loaded["data-met"] or true --- original size: 5453, stripped down to: 4007 +-- original size: 5460, stripped down to: 4014 if not modules then modules={} end modules ['data-met']={ version=1.100, @@ -13517,7 +13601,7 @@ local function splitmethod(filename) return filename end filename=file.collapsepath(filename,".") - if not find(filename,"://") then + if not find(filename,"://",1,true) then return { scheme="file",path=filename,original=filename,filename=filename } end local specification=url.hashed(filename) @@ -13607,7 +13691,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-res"] = package.loaded["data-res"] or true --- original size: 61799, stripped down to: 42957 +-- original size: 61824, stripped down to: 42982 if not modules then modules={} end modules ['data-res']={ version=1.001, @@ -13838,7 +13922,7 @@ local function identify_configuration_files() local realname=resolvers.resolve(filename) if trace_locating then local fullpath=gsub(resolvers.resolve(collapsepath(filepath)),"//","/") - local weirdpath=find(fullpath,"/texmf.+/texmf") or not find(fullpath,"/web2c") + local weirdpath=find(fullpath,"/texmf.+/texmf") or not find(fullpath,"/web2c",1,true) 
report_resolving("looking for %a on %s path %a from specification %a",luacnfname,weirdpath and "weird" or "given",fullpath,filepath) end if lfs.isfile(realname) then @@ -14427,7 +14511,7 @@ local function find_direct(filename,allresults) end end local function find_wildcard(filename,allresults) - if find(filename,'%*') then + if find(filename,'*',1,true) then if trace_locating then report_resolving("checking wildcard %a",filename) end @@ -14573,7 +14657,7 @@ local function find_intree(filename,filetype,wantedfiles,allresults) local scheme=url.hasscheme(pathname) if not scheme or scheme=="file" then local pname=gsub(pathname,"%.%*$",'') - if not find(pname,"%*") then + if not find(pname,"*",1,true) then if can_be_dir(pname) then for k=1,#wantedfiles do local w=wantedfiles[k] @@ -14842,7 +14926,7 @@ local function findwildcardfiles(filename,allresults,result) local path=lower(lpegmatch(makewildcard,dirn) or dirn) local name=lower(lpegmatch(makewildcard,base) or base) local files,done=instance.files,false - if find(name,"%*") then + if find(name,"*",1,true) then local hashes=instance.hashes for k=1,#hashes do local hash=hashes[k] @@ -15885,7 +15969,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-sch"] = package.loaded["data-sch"] or true --- original size: 6202, stripped down to: 5149 +-- original size: 6213, stripped down to: 5160 if not modules then modules={} end modules ['data-sch']={ version=1.001, @@ -15928,7 +16012,7 @@ function resolvers.schemes.cleanname(specification) end local cached,loaded,reused,thresholds,handlers={},{},{},{},{} local function runcurl(name,cachename) - local command="curl --silent --create-dirs --output "..cachename.." "..name + local command="curl --silent --insecure --create-dirs --output "..cachename.." 
"..name os.spawn(command) end local function fetch(specification) @@ -16791,8 +16875,8 @@ end -- of closure -- used libraries : l-lua.lua l-package.lua l-lpeg.lua l-function.lua l-string.lua l-table.lua l-io.lua l-number.lua l-set.lua l-os.lua l-file.lua l-gzip.lua l-md5.lua l-url.lua l-dir.lua l-boolean.lua l-unicode.lua l-math.lua util-str.lua util-tab.lua util-sto.lua util-prs.lua util-fmt.lua trac-set.lua trac-log.lua trac-inf.lua trac-pro.lua util-lua.lua util-deb.lua util-mrg.lua util-tpl.lua util-env.lua luat-env.lua lxml-tab.lua lxml-lpt.lua lxml-mis.lua lxml-aux.lua lxml-xml.lua trac-xml.lua data-ini.lua data-exp.lua data-env.lua data-tmp.lua data-met.lua data-res.lua data-pre.lua data-inp.lua data-out.lua data-fil.lua data-con.lua data-use.lua data-zip.lua data-tre.lua data-sch.lua data-lua.lua data-aux.lua data-tmf.lua data-lst.lua util-lib.lua luat-sta.lua luat-fmt.lua -- skipped libraries : - --- original bytes : 689993 --- stripped bytes : 244562 +-- original bytes : 694558 +-- stripped bytes : 246497 -- end library merge diff --git a/scripts/context/stubs/win64/mtxrun.lua b/scripts/context/stubs/win64/mtxrun.lua index 3372831b3..8679aefb1 100644 --- a/scripts/context/stubs/win64/mtxrun.lua +++ b/scripts/context/stubs/win64/mtxrun.lua @@ -56,7 +56,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-lua"] = package.loaded["l-lua"] or true --- original size: 3247, stripped down to: 1763 +-- original size: 3409, stripped down to: 1763 if not modules then modules={} end modules ['l-lua']={ version=1.001, @@ -1187,7 +1187,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-table"] = package.loaded["l-table"] or true --- original size: 31142, stripped down to: 20283 +-- original size: 31860, stripped down to: 20846 if not modules then modules={} end modules ['l-table']={ version=1.001, @@ -1265,6 +1265,36 @@ local function sortedkeys(tab) return {} end end +local function sortedhashonly(tab) + if tab then + local 
srt,s={},0 + for key,_ in next,tab do + if type(key)=="string" then + s=s+1 + srt[s]=key + end + end + sort(srt) + return srt + else + return {} + end +end +local function sortedindexonly(tab) + if tab then + local srt,s={},0 + for key,_ in next,tab do + if type(key)=="number" then + s=s+1 + srt[s]=key + end + end + sort(srt) + return srt + else + return {} + end +end local function sortedhashkeys(tab,cmp) if tab then local srt,s={},0 @@ -1290,6 +1320,8 @@ function table.allkeys(t) return sortedkeys(keys) end table.sortedkeys=sortedkeys +table.sortedhashonly=sortedhashonly +table.sortedindexonly=sortedindexonly table.sortedhashkeys=sortedhashkeys local function nothing() end local function sortedhash(t,cmp) @@ -2078,7 +2110,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-io"] = package.loaded["l-io"] or true --- original size: 8817, stripped down to: 6340 +-- original size: 8824, stripped down to: 6347 if not modules then modules={} end modules ['l-io']={ version=1.001, @@ -2092,7 +2124,7 @@ local byte,find,gsub,format=string.byte,string.find,string.gsub,string.format local concat=table.concat local floor=math.floor local type=type -if string.find(os.getenv("PATH"),";") then +if string.find(os.getenv("PATH"),";",1,true) then io.fileseparator,io.pathseparator="\\",";" else io.fileseparator,io.pathseparator="/",":" @@ -2613,7 +2645,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-os"] = package.loaded["l-os"] or true --- original size: 16023, stripped down to: 9634 +-- original size: 16093, stripped down to: 9704 if not modules then modules={} end modules ['l-os']={ version=1.001, @@ -2703,7 +2735,7 @@ function os.resultof(command) end end if not io.fileseparator then - if find(os.getenv("PATH"),";") then + if find(os.getenv("PATH"),";",1,true) then io.fileseparator,io.pathseparator,os.type="\\",";",os.type or "mswin" else io.fileseparator,io.pathseparator,os.type="/",":",os.type or "unix" @@ -2763,7 +2795,7 @@ if 
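The two new `l-table` helpers above, `sortedhashonly` and `sortedindexonly`, split a mixed table's keys by type before sorting, so string keys and numeric keys never end up in one `sort` call (comparing a string with a number would error in Lua). Equivalent behavior sketched in Python:

```python
def sorted_hash_only(t):
    # table.sortedhashonly: the sorted string keys only.
    return sorted(k for k in t if isinstance(k, str))

def sorted_index_only(t):
    # table.sortedindexonly: the sorted numeric keys only.
    return sorted(k for k in t
                  if isinstance(k, (int, float)) and not isinstance(k, bool))

mixed = {1: "a", "beta": 2, 3: "c", "alpha": 4}
assert sorted_hash_only(mixed) == ["alpha", "beta"]
assert sorted_index_only(mixed) == [1, 3]
```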
platform~="" then elseif os.type=="windows" then function resolvers.platform(t,k) local platform,architecture="",os.getenv("PROCESSOR_ARCHITECTURE") or "" - if find(architecture,"AMD64") then + if find(architecture,"AMD64",1,true) then platform="win64" else platform="mswin" @@ -2775,9 +2807,9 @@ elseif os.type=="windows" then elseif name=="linux" then function resolvers.platform(t,k) local platform,architecture="",os.getenv("HOSTTYPE") or os.resultof("uname -m") or "" - if find(architecture,"x86_64") then + if find(architecture,"x86_64",1,true) then platform="linux-64" - elseif find(architecture,"ppc") then + elseif find(architecture,"ppc",1,true) then platform="linux-ppc" else platform="linux" @@ -2791,9 +2823,9 @@ elseif name=="macosx" then local platform,architecture="",os.resultof("echo $HOSTTYPE") or "" if architecture=="" then platform="osx-intel" - elseif find(architecture,"i386") then + elseif find(architecture,"i386",1,true) then platform="osx-intel" - elseif find(architecture,"x86_64") then + elseif find(architecture,"x86_64",1,true) then platform="osx-64" else platform="osx-ppc" @@ -2805,7 +2837,7 @@ elseif name=="macosx" then elseif name=="sunos" then function resolvers.platform(t,k) local platform,architecture="",os.resultof("uname -m") or "" - if find(architecture,"sparc") then + if find(architecture,"sparc",1,true) then platform="solaris-sparc" else platform="solaris-intel" @@ -2817,7 +2849,7 @@ elseif name=="sunos" then elseif name=="freebsd" then function resolvers.platform(t,k) local platform,architecture="",os.resultof("uname -m") or "" - if find(architecture,"amd64") then + if find(architecture,"amd64",1,true) then platform="freebsd-amd64" else platform="freebsd" @@ -2829,7 +2861,7 @@ elseif name=="freebsd" then elseif name=="kfreebsd" then function resolvers.platform(t,k) local platform,architecture="",os.getenv("HOSTTYPE") or os.resultof("uname -m") or "" - if find(architecture,"x86_64") then + if find(architecture,"x86_64",1,true) then 
platform="kfreebsd-amd64" else platform="kfreebsd-i386" @@ -2847,7 +2879,7 @@ else end end function resolvers.bits(t,k) - local bits=find(os.platform,"64") and 64 or 32 + local bits=find(os.platform,"64",1,true) and 64 or 32 os.bits=bits return bits end @@ -3735,7 +3767,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-dir"] = package.loaded["l-dir"] or true --- original size: 14768, stripped down to: 9107 +-- original size: 14788, stripped down to: 9096 if not modules then modules={} end modules ['l-dir']={ version=1.001, @@ -3759,7 +3791,7 @@ local isfile=lfs.isfile local currentdir=lfs.currentdir local chdir=lfs.chdir local mkdir=lfs.mkdir -local onwindows=os.type=="windows" or find(os.getenv("PATH"),";") +local onwindows=os.type=="windows" or find(os.getenv("PATH"),";",1,true) if not isdir then function isdir(name) local a=attributes(name) @@ -3861,7 +3893,7 @@ local function glob(str,t) local split=lpegmatch(pattern,str) if split then local root,path,base=split[1],split[2],split[3] - local recurse=find(base,"%*%*") + local recurse=find(base,"**",1,true) local start=root..path local result=lpegmatch(filter,start..base) globpattern(start,result,recurse,t) @@ -3887,7 +3919,7 @@ local function glob(str,t) local t=t or {} local action=action or function(name) t[#t+1]=name end local root,path,base=split[1],split[2],split[3] - local recurse=find(base,"%*%*") + local recurse=find(base,"**",1,true) local start=root..path local result=lpegmatch(filter,start..base) globpattern(start,result,recurse,action) @@ -3942,7 +3974,6 @@ if onwindows then str="" for i=1,n do local s=select(i,...) - local s=select(i,...) 
if s=="" then elseif str=="" then str=s @@ -4195,7 +4226,7 @@ do -- create closure to overcome 200 locals limit package.loaded["l-unicode"] = package.loaded["l-unicode"] or true --- original size: 33473, stripped down to: 14938 +-- original size: 33706, stripped down to: 14938 if not modules then modules={} end modules ['l-unicode']={ version=1.001, @@ -4840,7 +4871,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-str"] = package.loaded["util-str"] or true --- original size: 29502, stripped down to: 16632 +-- original size: 32843, stripped down to: 18226 if not modules then modules={} end modules ['util-str']={ version=1.001, @@ -4876,9 +4907,11 @@ end if not number then number={} end local stripper=patterns.stripzeros local function points(n) + n=tonumber(n) return (not n or n==0) and "0pt" or lpegmatch(stripper,format("%.5fpt",n/65536)) end local function basepoints(n) + n=tonumber(n) return (not n or n==0) and "0bp" or lpegmatch(stripper,format("%.5fbp",n*(7200/7227)/65536)) end number.points=points @@ -4941,11 +4974,39 @@ local pattern=Carg(1)/function(t) function strings.tabtospace(str,tab) return lpegmatch(pattern,str,1,tab or 7) end -function strings.striplong(str) - str=gsub(str,"^%s*","") - str=gsub(str,"[\n\r]+ *","\n") - return str +local newline=patterns.newline +local endofstring=patterns.endofstring +local whitespace=patterns.whitespace +local spacer=patterns.spacer +local space=spacer^0 +local nospace=space/"" +local endofline=nospace*newline +local stripend=(whitespace^1*endofstring)/"" +local normalline=(nospace*((1-space*(newline+endofstring))^1)*nospace) +local stripempty=endofline^1/"" +local normalempty=endofline^1 +local singleempty=endofline*(endofline^0/"") +local doubleempty=endofline*endofline^-1*(endofline^0/"") +local stripstart=stripempty^0 +local p_prune_normal=Cs (stripstart*(stripend+normalline+normalempty )^0 ) +local p_prune_collapse=Cs (stripstart*(stripend+normalline+doubleempty )^0 ) +local 
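The `util-str` hunk above adds an `n=tonumber(n)` guard to `number.points` and `number.basepoints`, so string input is coerced before the zero test and the division by 65536 (one scaled point is 1/65536 pt). A hedged Python sketch of `points` after the patch, with the trailing-zero stripping done by `stripzeros` approximated with `rstrip`:

```python
def points(n):
    # Sketch of number.points: coerce first, short-circuit 0/none to "0pt",
    # else format scaled points as pt and strip trailing zeros.
    try:
        n = float(n)
    except (TypeError, ValueError):
        n = None
    if not n:
        return "0pt"
    return ("%.5f" % (n / 65536)).rstrip("0").rstrip(".") + "pt"
```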
p_prune_noempty=Cs (stripstart*(stripend+normalline+singleempty )^0 ) +local p_retain_normal=Cs ((normalline+normalempty )^0 ) +local p_retain_collapse=Cs ((normalline+doubleempty )^0 ) +local p_retain_noempty=Cs ((normalline+singleempty )^0 ) +local striplinepatterns={ + ["prune"]=p_prune_normal, + ["prune and collapse"]=p_prune_collapse, + ["prune and no empty"]=p_prune_noempty, + ["retain"]=p_retain_normal, + ["retain and collapse"]=p_retain_collapse, + ["retain and no empty"]=p_retain_noempty, +} +strings.striplinepatterns=striplinepatterns +function strings.striplines(str,how) + return str and lpegmatch(how and striplinepatterns[how] or p_prune_collapse,str) or str end +strings.striplong=strings.striplines function strings.nice(str) str=gsub(str,"[:%-+_]+"," ") return str @@ -5111,7 +5172,7 @@ local format_i=function(f) if f and f~="" then return format("format('%%%si',a%s)",f,n) else - return format("format('%%i',a%s)",n) + return format("format('%%i',a%s)",n) end end local format_d=format_i @@ -5123,6 +5184,10 @@ local format_f=function(f) n=n+1 return format("format('%%%sf',a%s)",f,n) end +local format_F=function(f) + n=n+1 + return format("((a%s == 0 and '0') or (a%s == 1 and '1') or format('%%%sf',a%s))",n,n,f,n) +end local format_g=function(f) n=n+1 return format("format('%%%sg',a%s)",f,n) @@ -5337,7 +5402,7 @@ local builder=Cs { "start", ( P("%")/""*( V("!") -+V("s")+V("q")+V("i")+V("d")+V("f")+V("g")+V("G")+V("e")+V("E")+V("x")+V("X")+V("o") ++V("s")+V("q")+V("i")+V("d")+V("f")+V("F")+V("g")+V("G")+V("e")+V("E")+V("x")+V("X")+V("o") +V("c")+V("C")+V("S") +V("Q") +V("N") @@ -5357,6 +5422,7 @@ local builder=Cs { "start", ["i"]=(prefix_any*P("i"))/format_i, ["d"]=(prefix_any*P("d"))/format_d, ["f"]=(prefix_any*P("f"))/format_f, + ["F"]=(prefix_any*P("F"))/format_F, ["g"]=(prefix_any*P("g"))/format_g, ["G"]=(prefix_any*P("G"))/format_G, ["e"]=(prefix_any*P("e"))/format_e, @@ -5404,7 +5470,7 @@ local function make(t,str) f=loadstripped(p)() else n=0 - 
p=lpegmatch(builder,str,1,"..",t._extensions_) + p=lpegmatch(builder,str,1,t._connector_,t._extensions_) if n>0 then p=format(template,preamble,t._preamble_,arguments[n],p) f=loadstripped(p,t._environment_)() @@ -5420,18 +5486,18 @@ local function use(t,fmt,...) end strings.formatters={} if _LUAVERSION<5.2 then - function strings.formatters.new() - local t={ _extensions_={},_preamble_=preamble,_environment_={},_type_="formatter" } + function strings.formatters.new(noconcat) + local t={ _type_="formatter",_connector_=noconcat and "," or "..",_extensions_={},_preamble_=preamble,_environment_={} } setmetatable(t,{ __index=make,__call=use }) return t end else - function strings.formatters.new() + function strings.formatters.new(noconcat) local e={} for k,v in next,environment do e[k]=v end - local t={ _extensions_={},_preamble_="",_environment_=e,_type_="formatter" } + local t={ _type_="formatter",_connector_=noconcat and "," or "..",_extensions_={},_preamble_="",_environment_=e } setmetatable(t,{ __index=make,__call=use }) return t end @@ -5473,7 +5539,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-tab"] = package.loaded["util-tab"] or true --- original size: 23980, stripped down to: 16119 +-- original size: 23985, stripped down to: 16069 if not modules then modules={} end modules ['util-tab']={ version=1.001, @@ -5494,27 +5560,29 @@ local sortedkeys,sortedpairs=table.sortedkeys,table.sortedpairs local formatters=string.formatters local utftoeight=utf.toeight local splitter=lpeg.tsplitat(".") -function tables.definetable(target,nofirst,nolast) - local composed,shortcut,t=nil,nil,{} +function utilities.tables.definetable(target,nofirst,nolast) + local composed,t=nil,{} local snippets=lpegmatch(splitter,target) for i=1,#snippets-(nolast and 1 or 0) do local name=snippets[i] if composed then - composed=shortcut.."."..name - shortcut=shortcut.."_"..name - t[#t+1]=formatters["local %s = %s if not %s then %s = { } %s = %s 
end"](shortcut,composed,shortcut,shortcut,composed,shortcut) + composed=composed.."."..name + t[#t+1]=formatters["if not %s then %s = { } end"](composed,composed) else composed=name - shortcut=name if not nofirst then t[#t+1]=formatters["%s = %s or { }"](composed,composed) end end end - if nolast then - composed=shortcut.."."..snippets[#snippets] + if composed then + if nolast then + composed=composed.."."..snippets[#snippets] + end + return concat(t,"\n"),composed + else + return "",target end - return concat(t,"\n"),composed end function tables.definedtable(...) local t=_G @@ -5541,7 +5609,7 @@ function tables.accesstable(target,root) end function tables.migratetable(target,v,root) local t=root or _G - local names=string.split(target,".") + local names=lpegmatch(splitter,target) for i=1,#names-1 do local name=names[i] t[name]=t[name] or {} @@ -6230,7 +6298,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-prs"] = package.loaded["util-prs"] or true --- original size: 19604, stripped down to: 13998 +-- original size: 19618, stripped down to: 14012 if not modules then modules={} end modules ['util-prs']={ version=1.001, @@ -6375,12 +6443,12 @@ function parsers.settings_to_array(str,strict) elseif not str or str=="" then return {} elseif strict then - if find(str,"{") then + if find(str,"{",1,true) then return lpegmatch(pattern,str) else return { str } end - elseif find(str,",") then + elseif find(str,",",1,true) then return lpegmatch(pattern,str) else return { str } @@ -7112,7 +7180,7 @@ do -- create closure to overcome 200 locals limit package.loaded["trac-log"] = package.loaded["trac-log"] or true --- original size: 25391, stripped down to: 16561 +-- original size: 25607, stripped down to: 16617 if not modules then modules={} end modules ['trac-log']={ version=1.001, @@ -7466,9 +7534,10 @@ local function setblocked(category,value) v.state=value end else - states=utilities.parsers.settings_to_hash(category) + 
states=utilities.parsers.settings_to_hash(category,type(states)=="table" and states or nil) for c,_ in next,states do - if data[c] then + local v=data[c] + if v then v.state=value else c=topattern(c,true,true) @@ -7747,7 +7816,7 @@ do -- create closure to overcome 200 locals limit package.loaded["trac-inf"] = package.loaded["trac-inf"] or true --- original size: 6643, stripped down to: 5272 +-- original size: 7011, stripped down to: 5590 if not modules then modules={} end modules ['trac-inf']={ version=1.001, @@ -7757,7 +7826,7 @@ if not modules then modules={} end modules ['trac-inf']={ license="see context related readme files" } local type,tonumber,select=type,tonumber,select -local format,lower=string.format,string.lower +local format,lower,find=string.format,string.lower,string.find local concat=table.concat local clock=os.gettimeofday or os.clock local setmetatableindex=table.setmetatableindex @@ -7848,10 +7917,8 @@ function statistics.show() if statistics.enable then local register=statistics.register register("used platform",function() - local mask=lua.mask or "ascii" - return format("%s, type: %s, binary subtree: %s, symbol mask: %s (%s)", - os.platform or "unknown",os.type or "unknown",environment.texos or "unknown", - mask,mask=="utf" and "τεχ" or "tex") + return format("%s, type: %s, binary subtree: %s", + os.platform or "unknown",os.type or "unknown",environment.texos or "unknown") end) register("luatex banner",function() return lower(status.banner) @@ -7864,14 +7931,23 @@ function statistics.show() return format("%s direct, %s indirect, %s total",total-indirect,indirect,total) end) if jit then - local status={ jit.status() } - if status[1] then - register("luajit status",function() - return concat(status," ",2) - end) + local jitstatus={ jit.status() } + if jitstatus[1] then + register("luajit options",concat(jitstatus," ",2)) end end - register("current memory usage",statistics.memused) + register("lua properties",function() + local 
list=status.list() + local hashchar=tonumber(list.luatex_hashchars) + local mask=lua.mask or "ascii" + return format("engine: %s, used memory: %s, hash type: %s, hash chars: min(%s,40), symbol mask: %s (%s)", + jit and "luajit" or "lua", + statistics.memused(), + list.luatex_hashtype or "default", + hashchar and 2^hashchar or "unknown", + mask, + mask=="utf" and "τεχ" or "tex") + end) register("runtime",statistics.runtime) logs.newline() for i=1,#statusinfo do @@ -8616,7 +8692,7 @@ do -- create closure to overcome 200 locals limit package.loaded["util-env"] = package.loaded["util-env"] or true --- original size: 8807, stripped down to: 5085 +-- original size: 8814, stripped down to: 5092 if not modules then modules={} end modules ['util-env']={ version=1.001, @@ -8753,7 +8829,7 @@ function environment.reconstructcommandline(arg,noquote) a=resolvers.resolve(a) a=unquoted(a) a=gsub(a,'"','\\"') - if find(a," ") then + if find(a," ",1,true) then result[#result+1]=quoted(a) else result[#result+1]=a @@ -8813,7 +8889,7 @@ do -- create closure to overcome 200 locals limit package.loaded["luat-env"] = package.loaded["luat-env"] or true --- original size: 5930, stripped down to: 4235 +-- original size: 6174, stripped down to: 4141 if not modules then modules={} end modules ['luat-env']={ version=1.001, @@ -8891,15 +8967,13 @@ function environment.luafilechunk(filename,silent) filename=file.replacesuffix(filename,"lua") local fullname=environment.luafile(filename) if fullname and fullname~="" then - local data=luautilities.loadedluacode(fullname,strippable,filename) - if trace_locating then + local data=luautilities.loadedluacode(fullname,strippable,filename) + if not silent then report_lua("loading file %a %s",fullname,not data and "failed" or "succeeded") - elseif not silent then - texio.write("<",data and "+ " or "- ",fullname,">") end return data else - if trace_locating then + if not silent then report_lua("unknown file %a",filename) end return nil @@ -9955,7 +10029,7 
@@ do -- create closure to overcome 200 locals limit package.loaded["lxml-lpt"] = package.loaded["lxml-lpt"] or true --- original size: 48956, stripped down to: 30516 +-- original size: 48030, stripped down to: 30595 if not modules then modules={} end modules ['lxml-lpt']={ version=1.001, @@ -10936,8 +11010,13 @@ function xml.elements(root,pattern,reverse) local collected=applylpath(root,pattern) if not collected then return dummy - elseif reverse then - local c=#collected+1 + end + local n=#collected + if n==0 then + return dummy + end + if reverse then + local c=n+1 return function() if c>1 then c=c-1 @@ -10947,7 +11026,7 @@ function xml.elements(root,pattern,reverse) end end else - local n,c=#collected,0 + local c=0 return function() if c<n then c=c+1 @@ -10962,8 +11041,13 @@ function xml.collected(root,pattern,reverse) local collected=applylpath(root,pattern) if not collected then return dummy - elseif reverse then - local c=#collected+1 + end + local n=#collected + if n==0 then + return dummy + end + if reverse then + local c=n+1 return function() if c>1 then c=c-1 @@ -10971,7 +11055,7 @@ function xml.collected(root,pattern,reverse) end end else - local n,c=#collected,0 + local c=0 return function() if c<n then c=c+1 @@ -10986,7 +11070,7 @@ function xml.inspect(collection,pattern) report_lpath("pattern: %s\n\n%s\n",pattern,xml.tostring(e)) end end -local function split(e) +local function split(e) local dt=e.dt if dt then for i=1,#dt do @@ -12326,7 +12410,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-ini"] = package.loaded["data-ini"] or true --- original size: 7898, stripped down to: 5501 +-- original size: 7927, stripped down to: 5528 if not modules then modules={} end modules ['data-ini']={ version=1.001, @@ -12470,7 +12554,7 @@ if not texroot or texroot=="" then ossetenv('TEXROOT',texroot) end environment.texroot=file.collapsepath(texroot) -if profiler then +if type(profiler)=="table" and not jit then 
directives.register("system.profile",function() profiler.start("luatex-profile.log") end) @@ -12488,7 +12572,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-exp"] = package.loaded["data-exp"] or true --- original size: 15303, stripped down to: 9716 +-- original size: 15317, stripped down to: 9723 if not modules then modules={} end modules ['data-exp']={ version=1.001, @@ -12610,7 +12694,7 @@ function resolvers.cleanpath(str) report_expansions("no home dir set, ignoring dependent paths") end function resolvers.cleanpath(str) - if not str or find(str,"~") then + if not str or find(str,"~",1,true) then return "" else return lpegmatch(cleanup,str) @@ -13488,7 +13572,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-met"] = package.loaded["data-met"] or true --- original size: 5453, stripped down to: 4007 +-- original size: 5460, stripped down to: 4014 if not modules then modules={} end modules ['data-met']={ version=1.100, @@ -13517,7 +13601,7 @@ local function splitmethod(filename) return filename end filename=file.collapsepath(filename,".") - if not find(filename,"://") then + if not find(filename,"://",1,true) then return { scheme="file",path=filename,original=filename,filename=filename } end local specification=url.hashed(filename) @@ -13607,7 +13691,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-res"] = package.loaded["data-res"] or true --- original size: 61799, stripped down to: 42957 +-- original size: 61824, stripped down to: 42982 if not modules then modules={} end modules ['data-res']={ version=1.001, @@ -13838,7 +13922,7 @@ local function identify_configuration_files() local realname=resolvers.resolve(filename) if trace_locating then local fullpath=gsub(resolvers.resolve(collapsepath(filepath)),"//","/") - local weirdpath=find(fullpath,"/texmf.+/texmf") or not find(fullpath,"/web2c") + local weirdpath=find(fullpath,"/texmf.+/texmf") or not find(fullpath,"/web2c",1,true) 
report_resolving("looking for %a on %s path %a from specification %a",luacnfname,weirdpath and "weird" or "given",fullpath,filepath) end if lfs.isfile(realname) then @@ -14427,7 +14511,7 @@ local function find_direct(filename,allresults) end end local function find_wildcard(filename,allresults) - if find(filename,'%*') then + if find(filename,'*',1,true) then if trace_locating then report_resolving("checking wildcard %a",filename) end @@ -14573,7 +14657,7 @@ local function find_intree(filename,filetype,wantedfiles,allresults) local scheme=url.hasscheme(pathname) if not scheme or scheme=="file" then local pname=gsub(pathname,"%.%*$",'') - if not find(pname,"%*") then + if not find(pname,"*",1,true) then if can_be_dir(pname) then for k=1,#wantedfiles do local w=wantedfiles[k] @@ -14842,7 +14926,7 @@ local function findwildcardfiles(filename,allresults,result) local path=lower(lpegmatch(makewildcard,dirn) or dirn) local name=lower(lpegmatch(makewildcard,base) or base) local files,done=instance.files,false - if find(name,"%*") then + if find(name,"*",1,true) then local hashes=instance.hashes for k=1,#hashes do local hash=hashes[k] @@ -15885,7 +15969,7 @@ do -- create closure to overcome 200 locals limit package.loaded["data-sch"] = package.loaded["data-sch"] or true --- original size: 6202, stripped down to: 5149 +-- original size: 6213, stripped down to: 5160 if not modules then modules={} end modules ['data-sch']={ version=1.001, @@ -15928,7 +16012,7 @@ function resolvers.schemes.cleanname(specification) end local cached,loaded,reused,thresholds,handlers={},{},{},{},{} local function runcurl(name,cachename) - local command="curl --silent --create-dirs --output "..cachename.." "..name + local command="curl --silent --insecure --create-dirs --output "..cachename.." 
"..name os.spawn(command) end local function fetch(specification) @@ -16791,8 +16875,8 @@ end -- of closure -- used libraries : l-lua.lua l-package.lua l-lpeg.lua l-function.lua l-string.lua l-table.lua l-io.lua l-number.lua l-set.lua l-os.lua l-file.lua l-gzip.lua l-md5.lua l-url.lua l-dir.lua l-boolean.lua l-unicode.lua l-math.lua util-str.lua util-tab.lua util-sto.lua util-prs.lua util-fmt.lua trac-set.lua trac-log.lua trac-inf.lua trac-pro.lua util-lua.lua util-deb.lua util-mrg.lua util-tpl.lua util-env.lua luat-env.lua lxml-tab.lua lxml-lpt.lua lxml-mis.lua lxml-aux.lua lxml-xml.lua trac-xml.lua data-ini.lua data-exp.lua data-env.lua data-tmp.lua data-met.lua data-res.lua data-pre.lua data-inp.lua data-out.lua data-fil.lua data-con.lua data-use.lua data-zip.lua data-tre.lua data-sch.lua data-lua.lua data-aux.lua data-tmf.lua data-lst.lua util-lib.lua luat-sta.lua luat-fmt.lua -- skipped libraries : - --- original bytes : 689993 --- stripped bytes : 244562 +-- original bytes : 694558 +-- stripped bytes : 246497 -- end library merge diff --git a/tex/context/base/anch-pos.lua b/tex/context/base/anch-pos.lua index 0bd945c8a..c2b62bae7 100644 --- a/tex/context/base/anch-pos.lua +++ b/tex/context/base/anch-pos.lua @@ -14,6 +14,10 @@ more efficient.</p> -- plus (extra) is obsolete but we will keep it for a while +-- context(new_latelua_node(f_enhance(tag))) +-- => +-- context.lateluafunction(function() f_enhance(tag) end) + -- maybe replace texsp by our own converter (stay at the lua end) -- eventually mp will have large numbers so we can use sp there too @@ -174,9 +178,12 @@ local nofpages = nil -- beware ... 
we're not sparse here as lua will reserve slots for the nilled +local getpos = function() getpos = backends.codeinjections.getpos return getpos () end +local gethpos = function() gethpos = backends.codeinjections.gethpos return gethpos() end +local getvpos = function() getvpos = backends.codeinjections.getvpos return getvpos() end + local function setdim(name,w,h,d,extra) -- will be used when we move to sp allover - local x = pdf.h - local y = pdf.v + local x, y = getpos() if x == 0 then x = nil end if y == 0 then y = nil end if w == 0 then w = nil end @@ -226,10 +233,13 @@ local function enhance(data) data.r = region end if data.x == true then - data.x = pdf.h - end - if data.y == true then - data.y = pdf.v + if data.y == true then + data.x, data.y = getpos() + else + data.x = gethpos() + end + elseif data.y == true then + data.y = getvpos() end if data.p == true then data.p = texgetcount("realpageno") @@ -289,7 +299,7 @@ commands.setpos = setall function jobpositions.b_col(tag) tobesaved[tag] = { r = true, - x = pdf.h, + x = gethpos(), w = 0, } insert(columns,tag) @@ -301,7 +311,7 @@ function jobpositions.e_col(tag) if not t then -- something's wrong else - t.w = pdf.h - t.x + t.w = gethpos() - t.x t.r = region end remove(columns) @@ -328,8 +338,7 @@ end function jobpositions.b_region(tag) local last = tobesaved[tag] - last.x = pdf.h - last.y = pdf.v + last.x, last.y = getpos() last.p = texgetcount("realpageno") insert(regions,tag) region = tag @@ -337,10 +346,11 @@ end function jobpositions.e_region(correct) local last = tobesaved[region] + local v = getvpos() if correct then - last.h = last.y - pdf.v + last.h = last.y - v end - last.y = pdf.v + last.y = v remove(regions) region = regions[#regions] end @@ -357,7 +367,7 @@ function jobpositions.markregionbox(n,tag,correct) tobesaved[tag] = { p = true, x = true, - y = pdf.v, -- true, + y = getvpos(), -- true, w = w ~= 0 and w or nil, h = h ~= 0 and h or nil, d = d ~= 0 and d or nil, @@ -748,7 +758,7 @@ function 
commands.MPx(id) if jpi then local x = jpi.x if x and x ~= true and x ~= 0 then - context("%.5fpt",x*pt) + context("%.5Fpt",x*pt) return end end @@ -760,7 +770,7 @@ function commands.MPy(id) if jpi then local y = jpi.y if y and y ~= true and y ~= 0 then - context("%.5fpt",y*pt) + context("%.5Fpt",y*pt) return end end @@ -772,7 +782,7 @@ function commands.MPw(id) if jpi then local w = jpi.w if w and w ~= 0 then - context("%.5fpt",w*pt) + context("%.5Fpt",w*pt) return end end @@ -784,7 +794,7 @@ function commands.MPh(id) if jpi then local h = jpi.h if h and h ~= 0 then - context("%.5fpt",h*pt) + context("%.5Fpt",h*pt) return end end @@ -796,7 +806,7 @@ function commands.MPd(id) if jpi then local d = jpi.d if d and d ~= 0 then - context("%.5fpt",d*pt) + context("%.5Fpt",d*pt) return end end @@ -806,7 +816,7 @@ end function commands.MPxy(id) local jpi = collected[id] if jpi then - context('(%.5fpt,%.5fpt)', + context('(%.5Fpt,%.5Fpt)', jpi.x*pt, jpi.y*pt ) @@ -818,7 +828,7 @@ end function commands.MPll(id) local jpi = collected[id] if jpi then - context('(%.5fpt,%.5fpt)', + context('(%.5Fpt,%.5Fpt)', jpi.x *pt, (jpi.y-jpi.d)*pt ) @@ -830,7 +840,7 @@ end function commands.MPlr(id) local jpi = collected[id] if jpi then - context('(%.5fpt,%.5fpt)', + context('(%.5Fpt,%.5Fpt)', (jpi.x + jpi.w)*pt, (jpi.y - jpi.d)*pt ) @@ -842,7 +852,7 @@ end function commands.MPur(id) local jpi = collected[id] if jpi then - context('(%.5fpt,%.5fpt)', + context('(%.5Fpt,%.5Fpt)', (jpi.x + jpi.w)*pt, (jpi.y + jpi.h)*pt ) @@ -854,7 +864,7 @@ end function commands.MPul(id) local jpi = collected[id] if jpi then - context('(%.5fpt,%.5fpt)', + context('(%.5Fpt,%.5Fpt)', jpi.x *pt, (jpi.y + jpi.h)*pt ) @@ -868,7 +878,7 @@ local function MPpos(id) if jpi then local p = jpi.p if p then - context("%s,%.5fpt,%.5fpt,%.5fpt,%.5fpt,%.5fpt", + context("%s,%.5Fpt,%.5Fpt,%.5Fpt,%.5Fpt,%.5Fpt", p, jpi.x*pt, jpi.y*pt, @@ -926,7 +936,7 @@ local function MPpardata(n) t = collected[tag] end if t then - 
context("%.5fpt,%.5fpt,%.5fpt,%.5fpt,%s,%.5fpt", + context("%.5Fpt,%.5Fpt,%.5Fpt,%.5Fpt,%s,%.5Fpt", t.hs*pt, t.ls*pt, t.rs*pt, @@ -952,7 +962,7 @@ end function commands.MPls(id) local t = collected[id] if t then - context("%.5fpt",t.ls*pt) + context("%.5Fpt",t.ls*pt) else context("0pt") end @@ -961,7 +971,7 @@ end function commands.MPrs(id) local t = collected[id] if t then - context("%.5fpt",t.rs*pt) + context("%.5Fpt",t.rs*pt) else context("0pt") end @@ -994,7 +1004,7 @@ end function commands.MPxywhd(id) local t = collected[id] if t then - context("%.5fpt,%.5fpt,%.5fpt,%.5fpt,%.5fpt", + context("%.5Fpt,%.5Fpt,%.5Fpt,%.5Fpt,%.5Fpt", t.x*pt, t.y*pt, t.w*pt, diff --git a/tex/context/base/attr-ini.lua b/tex/context/base/attr-ini.lua index ad4081681..1e518467c 100644 --- a/tex/context/base/attr-ini.lua +++ b/tex/context/base/attr-ini.lua @@ -38,13 +38,13 @@ storage.register("attributes/names", names, "attributes.names") storage.register("attributes/numbers", numbers, "attributes.numbers") storage.register("attributes/list", list, "attributes.list") -function attributes.define(name,number) -- at the tex end - if not numbers[name] then - numbers[name] = number - names[number] = name - list[number] = { } - end -end +-- function attributes.define(name,number) -- at the tex end +-- if not numbers[name] then +-- numbers[name] = number +-- names[number] = name +-- list[number] = { } +-- end +-- end --[[ldx-- <p>We reserve this one as we really want it to be always set (faster).</p> @@ -58,33 +58,14 @@ are only used when no attribute is set at the \TEX\ end which normally happens in <l n='context'/>.</p> --ldx]]-- -sharedstorage.attributes_last_private = sharedstorage.attributes_last_private or 127 - --- to be considered (so that we can use an array access): --- --- local private = { } attributes.private = private --- --- setmetatable(private, { --- __index = function(t,name) --- local number = sharedstorage.attributes_last_private --- if number < 1023 then -- 
texgetcount("minallocatedattribute") - 1 --- number = number + 1 --- sharedstorage.attributes_last_private = number --- end --- numbers[name], names[number], list[number] = number, name, { } --- private[name] = number --- return number --- end, --- __call = function(t,name) --- return t[name] --- end --- } ) +sharedstorage.attributes_last_private = sharedstorage.attributes_last_private or 127 +sharedstorage.attributes_last_public = sharedstorage.attributes_last_public or 1024 function attributes.private(name) -- at the lua end (hidden from user) local number = numbers[name] if not number then local last = sharedstorage.attributes_last_private - if last < 1023 then -- texgetcount("minallocatedattribute") - 1 + if last < 1023 then last = last + 1 sharedstorage.attributes_last_private = last else @@ -97,6 +78,29 @@ function attributes.private(name) -- at the lua end (hidden from user) return number end +function attributes.public(name) -- at the lua end (hidden from user) + local number = numbers[name] + if not number then + local last = sharedstorage.attributes_last_public + if last < 65535 then + last = last + 1 + sharedstorage.attributes_last_public = last + else + report_attribute("no more room for public attributes") + os.exit() + end + number = last + numbers[name], names[number], list[number] = number, name, { } + end + return number +end + +attributes.system = attributes.private + +function attributes.define(name,number,category) + return (attributes[category or "public"] or attributes["public"])(name,number) +end + -- tracers local report_attribute = logs.reporter("attributes") @@ -124,11 +128,10 @@ end -- interface -commands.defineattribute = attributes.define -commands.showattributes = attributes.showcurrent +commands.showattributes = attributes.showcurrent -function commands.getprivateattribute(name) - context(attributes.private(name)) +function commands.defineattribute(name,category) + context(attributes.define(name,category)) end -- rather special diff 
--git a/tex/context/base/attr-ini.mkiv b/tex/context/base/attr-ini.mkiv index 3f49e67a9..0c5762534 100644 --- a/tex/context/base/attr-ini.mkiv +++ b/tex/context/base/attr-ini.mkiv @@ -40,31 +40,51 @@ \newtoks \attributesresetlist -\ifdefined \s!global \else \def\s!global{global} \fi % for metatex % or hard check later -\ifdefined \s!public \else \def\s!public{public} \fi % for metatex % or hard check later - -\unexpanded\def\defineattribute - {\dodoubleempty\attr_basics_define} - -\def\attr_basics_define[#1][#2]% alternatively we can let lua do the housekeeping - {\expandafter\newattribute\csname\??attributecount#1\endcsname - \expandafter\newconstant \csname\??attributeid#1\endcsname - \csname\??attributeid#1\endcsname\c_syst_last_allocated_attribute - \ctxcommand{defineattribute("#1",\number\c_syst_last_allocated_attribute)}% - \doifnotinset\s!global{#2}{\appendetoks\csname\??attributecount#1\endcsname\attributeunsetvalue\to\attributesresetlist}% - \doifinset \s!public{#2}{\expandafter\let\csname#1attribute\expandafter\endcsname\csname\??attributeid#1\endcsname}} - -\unexpanded\def\definesystemattribute - {\dodoubleempty\attr_basics_define_system} - -\def\attr_basics_define_system[#1][#2]% alternatively we can let lua do the housekeeping - {\scratchcounter\ctxcommand{getprivateattribute("#1")}\relax - \expandafter\attributedef\csname\??attributecount#1\endcsname\scratchcounter - \expandafter\newconstant \csname\??attributeid#1\endcsname - \csname\??attributeid#1\endcsname\scratchcounter - %\writestatus\m!system{defining system attribute #1 with number \number\scratchcounter}% - \doifnotinset\s!global{#2}{\appendetoks\csname\??attributecount#1\endcsname\attributeunsetvalue\to\attributesresetlist}% - \doifinset \s!public{#2}{\expandafter\let\csname#1attribute\expandafter\endcsname\csname\??attributeid#1\endcsname}} +\ifdefined \s!global \else \def\s!global {global} \fi % for metatex % or hard check later +\ifdefined \s!public \else \def\s!public {public} \fi % for 
metatex % or hard check later +\ifdefined \s!attribute \else \def\s!attribute{attribute} \fi % for metatex % or hard check later + +% \unexpanded\def\defineattribute +% {\dodoubleempty\attr_basics_define} +% +% \unexpanded\def\definesystemattribute +% {\dodoubleempty\attr_basics_define_system} +% +% \def\attr_basics_define[#1]% +% {\expandafter\newattribute\csname\??attributecount#1\endcsname +% \expandafter\newconstant \csname\??attributeid#1\endcsname +% \csname\??attributeid#1\endcsname\c_syst_last_allocated_attribute +% \ctxcommand{defineattribute("#1",\number\csname\??attributeid#1\endcsname)}% +% \attr_basics_define_properties[#1]} +% +% \def\attr_basics_define_system[#1]% +% {\scratchcounter\ctxcommand{getprivateattribute("#1")}\relax +% \expandafter\attributedef\csname\??attributecount#1\endcsname\scratchcounter +% \expandafter\newconstant \csname\??attributeid#1\endcsname +% \csname\??attributeid#1\endcsname\scratchcounter +% %\writestatus\m!system{defining system attribute #1 with number \number\scratchcounter}% +% \attr_basics_define_properties[#1]} +% +% \def\attr_basics_define_properties[#1][#2]% +% {\doifnotinset\s!global{#2}{\appendetoks\csname\??attributecount#1\endcsname\attributeunsetvalue\to\attributesresetlist}% +% \doifinset \s!public{#2}{\expandafter\let\csname#1\s!attribute\expandafter\endcsname\csname\??attributeid#1\endcsname}} + +\unexpanded\def\defineattribute {\dodoubleempty\attr_basics_define} +\unexpanded\def\definesystemattribute{\dodoubleempty\attr_basics_define_system} + +\def\attr_basics_define {\attr_basics_define_indeed{public}} +\def\attr_basics_define_system{\attr_basics_define_indeed{private}} + +\def\attr_basics_define_indeed#1[#2][#3]% + {\scratchcounter\ctxcommand{defineattribute("#2","#1")}\relax + %\writestatus\m!system{defining #1 attribute #2 with number \number\scratchcounter}% + \expandafter\attributedef\csname\??attributecount#2\endcsname\scratchcounter + \expandafter\newconstant \csname\??attributeid#2\endcsname + 
\csname\??attributeid#2\endcsname\scratchcounter + \doifnotinset\s!global{#3}{\appendetoks\csname\??attributecount#2\endcsname\attributeunsetvalue\to\attributesresetlist}% + \doifinset \s!public{#3}{\expandafter\let\csname#2\s!attribute\expandafter\endcsname\csname\??attributeid#2\endcsname}} + +\unexpanded\def\newattribute#1{\attr_basics_define_indeed{public}[\strippedcsname#1][]} % expandable so we can \edef them for speed diff --git a/tex/context/base/attr-lay.mkiv b/tex/context/base/attr-lay.mkiv index d4aae3060..6055c2a73 100644 --- a/tex/context/base/attr-lay.mkiv +++ b/tex/context/base/attr-lay.mkiv @@ -94,8 +94,7 @@ \let\layoutcomponentboxattribute \empty \unexpanded\def\showlayoutcomponents - {%\ctxlua{attributes.viewerlayers.enable()}% automatic - \let\setlayoutcomponentattribute \attr_layoutcomponent_set + {\let\setlayoutcomponentattribute \attr_layoutcomponent_set \let\resetlayoutcomponentattribute\attr_layoutcomponent_reset} \unexpanded\def\attr_layoutcomponent_cleanup diff --git a/tex/context/base/back-exp.lua b/tex/context/base/back-exp.lua index d4133396b..79538eb72 100644 --- a/tex/context/base/back-exp.lua +++ b/tex/context/base/back-exp.lua @@ -609,8 +609,8 @@ function structurestags.setfigure(name,page,width,height) usedimages.image[detailedtag("image")] = { name = name, page = page, - width = number.todimen(width,"cm","%0.3fcm"), - height = number.todimen(height,"cm","%0.3fcm"), + width = number.todimen(width, "cm","%0.3Fcm"), + height = number.todimen(height,"cm","%0.3Fcm"), } end diff --git a/tex/context/base/back-ini.lua b/tex/context/base/back-ini.lua index 6f58b3262..e2dabd91e 100644 --- a/tex/context/base/back-ini.lua +++ b/tex/context/base/back-ini.lua @@ -95,3 +95,11 @@ tables.vfspecials = allocate { startslant = comment, stopslant = comment, } + +-- we'd better have this return something (defaults) + +function codeinjections.getpos () return 0, 0 end +function codeinjections.gethpos () return 0 end +function codeinjections.getvpos () 
return 0 end +function codeinjections.hasmatrix() return false end +function codeinjections.getmatrix() return 1, 0, 0, 1, 0, 0 end diff --git a/tex/context/base/back-ini.mkiv b/tex/context/base/back-ini.mkiv index fc8759c14..de4ba6138 100644 --- a/tex/context/base/back-ini.mkiv +++ b/tex/context/base/back-ini.mkiv @@ -23,8 +23,9 @@ \unprotect -\ifdefined\everybackendshipout \else \newtoks\everybackendshipout \fi -\ifdefined\everylastbackendshipout \else \newtoks\everylastbackendshipout \fi +\ifdefined\everybackendshipout \else \newtoks\everybackendshipout \fi +\ifdefined\everylastbackendshipout \else \newtoks\everylastbackendshipout \fi +\ifdefined\everybackendlastinshipout \else \newtoks\everybackendlastinshipout \fi % e.g. finalize via latelua %D Right from the start \CONTEXT\ had a backend system based on %D runtime pluggable code. As most backend issues involved specials @@ -126,9 +127,9 @@ %D From now on, mapfile loading is also a special; we assume the %D more or less standard dvips syntax. 
-\let \doresetmapfilelist \donothing -\let \doloadmapfile \gobbletwoarguments % + - = | filename -\let \doloadmapline \gobbletwoarguments % + - = | fileline +%let \doresetmapfilelist \donothing +%let \doloadmapfile \gobbletwoarguments % + - = | filename +%let \doloadmapline \gobbletwoarguments % + - = | fileline %D \macros %D {jobsuffix} diff --git a/tex/context/base/back-pdf.lua b/tex/context/base/back-pdf.lua index f8a5dab6f..34a28e3f7 100644 --- a/tex/context/base/back-pdf.lua +++ b/tex/context/base/back-pdf.lua @@ -24,7 +24,7 @@ local context = context local sind, cosd = math.sind, math.cosd local insert, remove = table.insert, table.remove -local f_matrix = string.formatters["%0.8f %0.8f %0.8f %0.8f"] +local f_matrix = string.formatters["%0.8F %0.8F %0.8F %0.8F"] function commands.pdfrotation(a) -- todo: check for 1 and 0 and flush sparse diff --git a/tex/context/base/back-pdf.mkiv b/tex/context/base/back-pdf.mkiv index 948a14138..df9594507 100644 --- a/tex/context/base/back-pdf.mkiv +++ b/tex/context/base/back-pdf.mkiv @@ -18,8 +18,8 @@ \registerctxluafile{lpdf-nod}{1.001} \registerctxluafile{lpdf-col}{1.000} \registerctxluafile{lpdf-xmp}{1.001} -\registerctxluafile{lpdf-mis}{1.001} \registerctxluafile{lpdf-ano}{1.001} +\registerctxluafile{lpdf-mis}{1.001} \registerctxluafile{lpdf-ren}{1.001} \registerctxluafile{lpdf-grp}{1.001} \registerctxluafile{lpdf-wid}{1.001} @@ -238,7 +238,7 @@ %D The following will move to the backend \LUA\ code: -\appendtoks \ctxlua{backends.codeinjections.finalizepage ()}\to \everybackendshipout % is immediate +%appendtoks \ctxlua{backends.codeinjections.finalizepage ()}\to \everybackendshipout % is immediate %appendtoks \ctxlua{backends.codeinjections.finalizedocument()}\to \everylastbackendshipout % is immediate %D Temporary hack, will be removed or improved or default. 
diff --git a/tex/context/base/bibl-bib.lua b/tex/context/base/bibl-bib.lua index 65ca1f9e1..baeb3d2f9 100644 --- a/tex/context/base/bibl-bib.lua +++ b/tex/context/base/bibl-bib.lua @@ -105,7 +105,7 @@ local spacing = space^0 local equal = P("=") local collapsed = (space^1)/ " " -local function add(a,b) if b then return a..b else return a end end +----- function add(a,b) if b then return a..b else return a end end local keyword = C((R("az","AZ","09") + S("@_:-"))^1) -- C((1-space)^1) local s_quoted = ((escape*single) + collapsed + (1-single))^0 diff --git a/tex/context/base/bibl-tra.lua b/tex/context/base/bibl-tra.lua index 75dc3e86f..223554b4d 100644 --- a/tex/context/base/bibl-tra.lua +++ b/tex/context/base/bibl-tra.lua @@ -55,11 +55,11 @@ local ordered = { } local shorts = { } local mode = 0 -local template = utilities.strings.striplong([[ - \citation{*} - \bibstyle{cont-%s} - \bibdata{%s} -]]) +local template = [[ +\citation{*} +\bibstyle{cont-%s} +\bibdata{%s} +]] local bibtexbin = environment.arguments.mlbibtex and "mlbibcontext" or "bibtex" diff --git a/tex/context/base/buff-imp-lua.lua b/tex/context/base/buff-imp-lua.lua index 04e79afba..4396c1ab8 100644 --- a/tex/context/base/buff-imp-lua.lua +++ b/tex/context/base/buff-imp-lua.lua @@ -139,7 +139,7 @@ local comment = P("--") local name = (patterns.letter + patterns.underscore) * (patterns.letter + patterns.underscore + patterns.digit)^0 local boundary = S('()[]{}') -local special = S("-+/*^%=#") + P("..") +local special = S("-+/*^%=#~|<>") + P("..") -- The following longstring parser is taken from Roberto's documentation -- that can be found at http://www.inf.puc-rio.br/~roberto/lpeg/lpeg.html. 
diff --git a/tex/context/base/buff-ini.lua b/tex/context/base/buff-ini.lua index 08416c9ad..84532f072 100644 --- a/tex/context/base/buff-ini.lua +++ b/tex/context/base/buff-ini.lua @@ -12,9 +12,9 @@ local sub, format = string.sub, string.format local splitlines, validstring = string.splitlines, string.valid local P, Cs, patterns, lpegmatch = lpeg.P, lpeg.Cs, lpeg.patterns, lpeg.match -local trace_run = false trackers .register("buffers.run", function(v) trace_run = v end) -local trace_grab = false trackers .register("buffers.grab", function(v) trace_grab = v end) -local trace_visualize = false trackers .register("buffers.visualize", function(v) trace_visualize = v end) +local trace_run = false trackers.register("buffers.run", function(v) trace_run = v end) +local trace_grab = false trackers.register("buffers.grab", function(v) trace_grab = v end) +local trace_visualize = false trackers.register("buffers.visualize", function(v) trace_visualize = v end) local report_buffers = logs.reporter("buffers","usage") local report_typeset = logs.reporter("buffers","typeset") @@ -143,37 +143,12 @@ local function collectcontent(name,separator) -- no print end local function loadcontent(name) -- no print - local names = getnames(name) - local nnames = #names - local ok = false - if nnames == 0 then - ok = load(getcontent("")) -- default buffer - elseif nnames == 1 then - ok = load(getcontent(names[1])) - else - -- lua 5.2 chunked load - local i = 0 - ok = load(function() - while true do - i = i + 1 - if i > nnames then - return nil - end - local c = getcontent(names[i]) - if c == "" then - -- would trigger end of load - else - return c - end - end - end) - end + local content = collectcontent(name,"\n") + local ok, err = load(content) if ok then return ok() - elseif nnames == 0 then - report_buffers("invalid lua code in default buffer") else - report_buffers("invalid lua code in buffer %a",concat(names,",")) + report_buffers("invalid lua code in buffer %a: %s",name,err or 
"unknown error") end end diff --git a/tex/context/base/buff-ver.lua b/tex/context/base/buff-ver.lua index 3300ac6cb..14914d42d 100644 --- a/tex/context/base/buff-ver.lua +++ b/tex/context/base/buff-ver.lua @@ -46,70 +46,70 @@ local v_all = variables.all -- beware, all macros have an argument: -local doinlineverbatimnewline = context.doinlineverbatimnewline -local doinlineverbatimbeginline = context.doinlineverbatimbeginline -local doinlineverbatimemptyline = context.doinlineverbatimemptyline -local doinlineverbatimstart = context.doinlineverbatimstart -local doinlineverbatimstop = context.doinlineverbatimstop - -local dodisplayverbatiminitialize = context.dodisplayverbatiminitialize -- the number of arguments might change over time -local dodisplayverbatimnewline = context.dodisplayverbatimnewline -local dodisplayverbatimbeginline = context.dodisplayverbatimbeginline -local dodisplayverbatimemptyline = context.dodisplayverbatimemptyline -local dodisplayverbatimstart = context.dodisplayverbatimstart -local dodisplayverbatimstop = context.dodisplayverbatimstop - -local verbatim = context.verbatim -local doverbatimspace = context.doverbatimspace +local ctx_inlineverbatimnewline = context.doinlineverbatimnewline +local ctx_inlineverbatimbeginline = context.doinlineverbatimbeginline +local ctx_inlineverbatimemptyline = context.doinlineverbatimemptyline +local ctx_inlineverbatimstart = context.doinlineverbatimstart +local ctx_inlineverbatimstop = context.doinlineverbatimstop + +local ctx_displayverbatiminitialize = context.dodisplayverbatiminitialize -- the number of arguments might change over time +local ctx_displayverbatimnewline = context.dodisplayverbatimnewline +local ctx_displayverbatimbeginline = context.dodisplayverbatimbeginline +local ctx_displayverbatimemptyline = context.dodisplayverbatimemptyline +local ctx_displayverbatimstart = context.dodisplayverbatimstart +local ctx_displayverbatimstop = context.dodisplayverbatimstop + +local ctx_verbatim = 
context.verbatim +local ctx_verbatimspace = context.doverbatimspace local CargOne = Carg(1) local function f_emptyline(s,settings) if settings and settings.nature == "inline" then - doinlineverbatimemptyline() + ctx_inlineverbatimemptyline() else - dodisplayverbatimemptyline() + ctx_displayverbatimemptyline() end end local function f_beginline(s,settings) if settings and settings.nature == "inline" then - doinlineverbatimbeginline() + ctx_inlineverbatimbeginline() else - dodisplayverbatimbeginline() + ctx_displayverbatimbeginline() end end local function f_newline(s,settings) if settings and settings.nature == "inline" then - doinlineverbatimnewline() + ctx_inlineverbatimnewline() else - dodisplayverbatimnewline() + ctx_displayverbatimnewline() end end local function f_start(s,settings) if settings and settings.nature == "inline" then - doinlineverbatimstart() + ctx_inlineverbatimstart() else - dodisplayverbatimstart() + ctx_displayverbatimstart() end end local function f_stop(s,settings) if settings and settings.nature == "inline" then - doinlineverbatimstop() + ctx_inlineverbatimstop() else - dodisplayverbatimstop() + ctx_displayverbatimstop() end end local function f_default(s) -- (s,settings) - verbatim(s) + ctx_verbatim(s) end local function f_space() -- (s,settings) - doverbatimspace() + ctx_verbatimspace() end local function f_signal() -- (s,settings) @@ -200,7 +200,7 @@ local function getvisualizer(method,nature) end end -local fallback = context.verbatim +local ctx_fallback = ctx_verbatim local function makepattern(visualizer,replacement,pattern) if not pattern then @@ -208,9 +208,9 @@ local function makepattern(visualizer,replacement,pattern) return patterns.alwaystrue else if type(visualizer) == "table" and type(replacement) == "string" then - replacement = visualizer[replacement] or fallback + replacement = visualizer[replacement] or ctx_fallback else - replacement = fallback + replacement = ctx_fallback end return (C(pattern) * CargOne) / replacement 
end @@ -506,7 +506,7 @@ local function visualize(content,settings) -- maybe also method in settings if trace_visualize then report_visualizers("visualize using method %a",method) end - fallback(content,1,settings) + ctx_fallback(content,1,settings) end end end @@ -711,7 +711,7 @@ commands.loadvisualizer = visualizers.load function commands.typebuffer(settings) local lines = getlines(settings.name) if lines then - dodisplayverbatiminitialize(#lines) + ctx_displayverbatiminitialize(#lines) local content, m = filter(lines,settings) if content and content ~= "" then -- content = decodecomment(content) diff --git a/tex/context/base/buff-ver.mkiv b/tex/context/base/buff-ver.mkiv index 6c4fb6fc1..17dfd9d69 100644 --- a/tex/context/base/buff-ver.mkiv +++ b/tex/context/base/buff-ver.mkiv @@ -19,6 +19,8 @@ \unprotect +\startcontextdefinitioncode + \definesystemattribute[verbatimline][public] \appendtoksonce @@ -169,7 +171,11 @@ \appendtoks \setuevalue{\e!start\currenttyping}{\buff_verbatim_typing_start{\currenttyping}}% \setuevalue{\e!stop \currenttyping}{\buff_verbatim_typing_stop {\currenttyping}}% - \normalexpanded{\definelinenumbering[\currenttyping]}% + \ifx\currenttypingparent\empty + \normalexpanded{\definelinenumbering[\currenttyping]}% + \else + \normalexpanded{\definelinenumbering[\currenttyping][\currenttypingparent]}% + \fi \to \everydefinetyping \appendtoks @@ -261,7 +267,7 @@ {\dontleavehmode \bgroup \edef\currenttype{#1}% - \doifnextoptionalelse\buff_verbatim_type_yes\buff_verbatim_type_nop} + \doifnextoptionalcselse\buff_verbatim_type_yes\buff_verbatim_type_nop} \def\buff_verbatim_type_yes[#1]% {\setupcurrenttype[#1]% @@ -277,7 +283,7 @@ \edef\currenttype{#1}% \lettypeparameter\c!lines\v!hyphenated \let\specialobeyedspace\specialstretchedspace - \doifnextoptionalelse\buff_verbatim_type_yes\buff_verbatim_type_nop} + \doifnextoptionalcselse\buff_verbatim_type_yes\buff_verbatim_type_nop} \def\buff_verbatim_type_one {\ifx\next\bgroup @@ -696,8 +702,11 @@ 
\definetyping[\v!typing] -\setuptyping[\v!file] [\s!parent=\??typing\v!typing] % we don't want \start..\stop overload -\setuptyping[\v!buffer][\s!parent=\??typing\v!file] % we don't want \start..\stop overload +\setuptyping [\v!file] [\s!parent=\??typing \v!typing] % we don't want \start..\stop overload +\setuplinenumbering[\v!file] [\s!parent=\??linenumbering\v!typing] + +\setuptyping [\v!buffer][\s!parent=\??typing \v!file] % we don't want \start..\stop overload +\setuplinenumbering[\v!buffer][\s!parent=\??linenumbering\v!file] %D The setups for inline verbatim default to: @@ -910,4 +919,6 @@ \def\tex #1{\letterbackslash#1}% \to \everysimplifycommands +\stopcontextdefinitioncode + \protect \endinput diff --git a/tex/context/base/catc-ctx.mkiv b/tex/context/base/catc-ctx.mkiv index ddade7f52..5af8a5035 100644 --- a/tex/context/base/catc-ctx.mkiv +++ b/tex/context/base/catc-ctx.mkiv @@ -142,4 +142,38 @@ \normalprotected\def\stopcontextcode {\popcatcodetable} +% not visible, only for special cases + +\newcatcodetable \ctdcatcodes % context definitions + +\startcatcodetable \ctdcatcodes + \catcode\tabasciicode \ignorecatcode + \catcode\endoflineasciicode \ignorecatcode + \catcode\formfeedasciicode \ignorecatcode + \catcode\spaceasciicode \ignorecatcode + \catcode\endoffileasciicode \ignorecatcode + \catcode\circumflexasciicode \superscriptcatcode % candidate + \catcode\underscoreasciicode \lettercatcode + \catcode\ampersandasciicode \alignmentcatcode +% \catcode\colonasciicode \lettercatcode % candidate + \catcode\backslashasciicode \escapecatcode + \catcode\leftbraceasciicode \begingroupcatcode + \catcode\rightbraceasciicode \endgroupcatcode + \catcode\dollarasciicode \mathshiftcatcode + \catcode\hashasciicode \parametercatcode + \catcode\commentasciicode \commentcatcode + \catcode\atsignasciicode \lettercatcode + \catcode\exclamationmarkasciicode\lettercatcode + \catcode\questionmarkasciicode \lettercatcode + \catcode\tildeasciicode \activecatcode + 
\catcode\barasciicode \activecatcode +\stopcatcodetable + +\normalprotected\def\startcontextdefinitioncode + {\pushcatcodetable + \catcodetable\ctdcatcodes} + +\normalprotected\def\stopcontextdefinitioncode + {\popcatcodetable} + \endinput diff --git a/tex/context/base/catc-ini.lua b/tex/context/base/catc-ini.lua index d4f9b65af..9241f5a1b 100644 --- a/tex/context/base/catc-ini.lua +++ b/tex/context/base/catc-ini.lua @@ -39,3 +39,7 @@ end table.setmetatableindex(numbers,function(t,k) if type(k) == "number" then t[k] = k return k end end) table.setmetatableindex(names, function(t,k) if type(k) == "string" then t[k] = k return k end end) + +commands.registercatcodetable = catcodes.register +--------.definecatcodetable = characters.define -- not yet defined +--------.setcharactercodes = characters.setcodes -- not yet defined diff --git a/tex/context/base/catc-ini.mkiv b/tex/context/base/catc-ini.mkiv index 791ce31c4..d8247217c 100644 --- a/tex/context/base/catc-ini.mkiv +++ b/tex/context/base/catc-ini.mkiv @@ -108,7 +108,7 @@ \expandafter\xdef\csname\??catcodetablen\number\c_syst_catcodes_n\endcsname{\string#1}% logging \newconstant#1% #1\c_syst_catcodes_n - \ctxlua{catcodes.register("\expandafter\gobbleoneargument\string#1",\number#1)}} + \ctxcommand{registercatcodetable("\expandafter\gobbleoneargument\string#1",\number#1)}} \newtoks \everysetdefaultcatcodes diff --git a/tex/context/base/catc-xml.mkiv b/tex/context/base/catc-xml.mkiv index 5e7df11f5..a23a2fe0a 100644 --- a/tex/context/base/catc-xml.mkiv +++ b/tex/context/base/catc-xml.mkiv @@ -114,20 +114,11 @@ %D We register the catcodetables at the \LUA\ end where some further %D initializations take place. 
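A side note on the `catcodes.register` / `registercatcodetable` idiom this hunk routes through `\ctxcommand`: it is a two-way map between symbolic catcode-table names and their numeric ids, where a key that is already of the target type falls through unchanged (the `setmetatableindex` defaults in `catc-ini.lua`). A minimal sketch, in Python rather than the actual Lua and with illustrative names only:

```python
class CatcodeRegistry:
    """Sketch (not the real ConTeXt API) of the register idiom:
    names map to numeric table ids and back, and a key that is
    already a number (or already a name) passes through as-is."""

    def __init__(self):
        self.numbers = {}  # name -> numeric id
        self.names = {}    # numeric id -> name

    def register(self, name, number):
        self.numbers[name] = number
        self.names[number] = name
        return number

    def tonumber(self, key):
        # a number is already a number: the metatable-index fallback
        return key if isinstance(key, int) else self.numbers[key]

    def toname(self, key):
        return key if isinstance(key, str) else self.names[key]
```

So after `register("xmlcatcodes", 5)`, both `tonumber("xmlcatcodes")` and `tonumber(5)` resolve to `5`, which is why the TeX end can pass either form.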
-\ctxlua { - characters.define( - { % letter catcodes - \number\xmlcatcodesn, - \number\xmlcatcodese, - \number\xmlcatcodesr, - }, - { % activate catcodes - \number\xmlcatcodesn, - \number\xmlcatcodese, - \number\xmlcatcodesr, - } - ) - catcodes.register("xmlcatcodes",\number\xmlcatcodes) -} +\ctxcommand{definecatcodetable( + {\number\xmlcatcodesn,\number\xmlcatcodese,\number\xmlcatcodesr},% letter catcodes + {\number\xmlcatcodesn,\number\xmlcatcodese,\number\xmlcatcodesr} % activate catcodes +)} + +\ctxcommand{registercatcodetable("xmlcatcodes",\number\xmlcatcodes)} \endinput diff --git a/tex/context/base/char-ini.lua b/tex/context/base/char-ini.lua index ac47760f3..d6e8d18a9 100644 --- a/tex/context/base/char-ini.lua +++ b/tex/context/base/char-ini.lua @@ -1214,3 +1214,6 @@ end -- entities.amp = utfchar(characters.activeoffset + utfbyte("&")) -- entities.gt = utfchar(characters.activeoffset + utfbyte(">")) -- end + +commands.definecatcodetable = characters.define +commands.setcharactercodes = characters.setcodes diff --git a/tex/context/base/char-ini.mkiv b/tex/context/base/char-ini.mkiv index 113d26709..db52ae723 100644 --- a/tex/context/base/char-ini.mkiv +++ b/tex/context/base/char-ini.mkiv @@ -65,32 +65,30 @@ % \def\setcclcuc#1#2#3{\global\catcode#1=\lettercatcode\global\lccode#1=#2\global\uccode#1=#3\relax} % \def\setcclcucself#1{\global\catcode#1=\lettercatcode\global\lccode#1=#1\global\uccode#1=#1\relax } -\ctxlua{characters.setcodes()} +\ctxcommand{setcharactercodes()} % Is setting up vrb tpa and tpb needed? 
-\ctxlua { - characters.define( - { % letter catcodes - \number\texcatcodes, - \number\ctxcatcodes, - \number\notcatcodes, - %number\mthcatcodes, - \number\vrbcatcodes, - \number\prtcatcodes, - \number\tpacatcodes, - \number\tpbcatcodes, - \number\txtcatcodes, - }, - { % activate catcodes - \number\ctxcatcodes, - \number\notcatcodes, - \number\prtcatcodes, % new - } - ) -% catcodes.register("xmlcatcodes",\number\xmlcatcodes) -} +\ctxcommand{definecatcodetable( + { % letter catcodes + \number\texcatcodes, + \number\ctxcatcodes, + \number\notcatcodes, + %number\mthcatcodes, + \number\vrbcatcodes, + \number\prtcatcodes, + \number\tpacatcodes, + \number\tpbcatcodes, + \number\txtcatcodes, + }, + { % activate catcodes + \number\ctxcatcodes, + \number\notcatcodes, + \number\prtcatcodes, % new + } +)} -\def\chardescription#1{\ctxcommand{chardescription(\number#1)}} +\def\chardescription#1% + {\ctxcommand{chardescription(\number#1)}} \protect \endinput diff --git a/tex/context/base/cldf-ini.lua index b29db4090..0a0f71266 100644 --- a/tex/context/base/cldf-ini.lua +++ b/tex/context/base/cldf-ini.lua @@ -23,6 +23,11 @@ if not modules then modules = { } end modules ['cldf-ini'] = { -- todo: context("%bold{total: }%s",total) -- todo: context.documentvariable("title") +-- during the crited project we ran into the situation that luajittex was 10-20 times +-- slower than luatex ... after 3 days of testing and probing we finally figured out that +-- the differences between the lua and luajit hashers can lead to quite a slowdown +-- in some cases.
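The main mechanism this `cldf-ini.lua` rewrite introduces further on in the hunk is free-list slot management for `\luafunction` slots: `registerfunction`, `unregisterfunction` and `reservefunction` push freed slot indices on a stack and reuse them before growing the table, so the stack no longer grows monotonically during trial typesetting. A generic sketch of that allocator, in Python with hypothetical names (and 0-based rather than Lua's 1-based indexing):

```python
class SlotTable:
    """Sketch of the free-list slot management used for function slots:
    freed slot indices are stacked and reused before the table grows."""

    def __init__(self):
        self.slots = []  # slot index -> stored callable (None when free)
        self.freed = []  # stack of reusable slot indices

    def register(self, f):
        if self.freed:                 # reuse a freed slot first
            n = self.freed.pop()
            self.slots[n] = f
        else:                          # otherwise grow the table
            self.slots.append(f)
            n = len(self.slots) - 1
        return n

    def unregister(self, n):
        self.slots[n] = None
        self.freed.append(n)

    def call_once(self, n):
        # run the stored function, then release its slot (cf. _cldo_)
        result = self.slots[n]()
        self.unregister(n)
        return result
```

The point of the design is that slot numbers handed to TeX stay small and dense, which keeps the function table compact even when many one-shot functions are flushed.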
+ local tex = tex context = context or { } @@ -37,7 +42,6 @@ local formatters = string.formatters -- using formatteds is slower in this case local loaddata = io.loaddata local texsprint = tex.sprint -local textprint = tex.tprint local texprint = tex.print local texwrite = tex.write local texgetcount = tex.getcount @@ -64,72 +68,239 @@ local report_cld = logs.reporter("cld","stack") local processlines = true -- experiments.register("context.processlines", function(v) processlines = v end) --- for tracing it's easier to have two stacks +-- In earlier experiments a function tables was referred to as lua.calls and the +-- primitive \luafunctions was \luacall. -local _stack_f_, _n_f_ = { }, 0 -local _stack_n_, _n_n_ = { }, 0 +local luafunctions = lua.get_functions_table and lua.get_functions_table() +local usedstack = nil +local showstackusage = false -local function _store_f_(ti) - _n_f_ = _n_f_ + 1 - _stack_f_[_n_f_] = ti - return _n_f_ -end +-- luafunctions = false -local function _store_n_(ti) - _n_n_ = _n_n_ + 1 - _stack_n_[_n_n_] = ti - return _n_n_ -end +trackers.register("context.stack",function(v) showstackusage = v end) -local function _flush_f_(n) - local sn = _stack_f_[n] - if not sn then - report_cld("data with id %a cannot be found on stack",n) - else - local tn = type(sn) - if tn == "function" then - if not sn() and texgetcount("@@trialtypesetting") == 0 then -- @@trialtypesetting is private! - _stack_f_[n] = nil - else - -- keep, beware, that way the stack can grow - end +local storefunction, flushfunction +local storenode, flushnode +local registerfunction, unregisterfunction, reservefunction, knownfunctions, callfunctiononce + +if luafunctions then + + local freed, nofused, noffreed = { }, 0, 0 -- maybe use the number of @@trialtypesetting + + usedstack = function() + return nofused, noffreed + end + + flushfunction = function(slot,arg) + if arg() then + -- keep + elseif texgetcount("@@trialtypesetting") == 0 then -- @@trialtypesetting is private! 
+ noffreed = noffreed + 1 + freed[noffreed] = slot + luafunctions[slot] = false else - if texgetcount("@@trialtypesetting") == 0 then -- @@trialtypesetting is private! - writenode(sn) - _stack_f_[n] = nil - else - writenode(copynodelist(sn)) - -- keep, beware, that way the stack can grow - end + -- keep end end -end -local function _flush_n_(n) - local sn = _stack_n_[n] - if not sn then - report_cld("data with id %a cannot be found on stack",n) - elseif texgetcount("@@trialtypesetting") == 0 then -- @@trialtypesetting is private! - writenode(sn) - _stack_n_[n] = nil - else - writenode(copynodelist(sn)) - -- keep, beware, that way the stack can grow + storefunction = function(arg) + local f = function(slot) flushfunction(slot,arg) end + if noffreed > 0 then + local n = freed[noffreed] + freed[noffreed] = nil + noffreed = noffreed - 1 + luafunctions[n] = f + return n + else + nofused = nofused + 1 + luafunctions[nofused] = f + return nofused + end + end + + flushnode = function(slot,arg) + if texgetcount("@@trialtypesetting") == 0 then -- @@trialtypesetting is private! 
+ writenode(arg) + noffreed = noffreed + 1 + freed[noffreed] = slot + luafunctions[slot] = false + else + writenode(copynodelist(arg)) + end + end + + storenode = function(arg) + local f = function(slot) flushnode(slot,arg) end + if noffreed > 0 then + local n = freed[noffreed] + freed[noffreed] = nil + noffreed = noffreed - 1 + luafunctions[n] = f + return n + else + nofused = nofused + 1 + luafunctions[nofused] = f + return nofused + end + end + + registerfunction = function(f) + if type(f) == "string" then + f = loadstring(f) + end + if type(f) ~= "function" then + f = function() report_cld("invalid function %A",f) end + end + if noffreed > 0 then + local n = freed[noffreed] + freed[noffreed] = nil + noffreed = noffreed - 1 + luafunctions[n] = f + return n + else + nofused = nofused + 1 + luafunctions[nofused] = f + return nofused + end + end + + unregisterfunction = function(slot) + if luafunctions[slot] then + noffreed = noffreed + 1 + freed[noffreed] = slot + luafunctions[slot] = false + else + report_cld("invalid function slot %A",slot) + end + end + + reservefunction = function() + if noffreed > 0 then + local n = freed[noffreed] + freed[noffreed] = nil + noffreed = noffreed - 1 + return n + else + nofused = nofused + 1 + return nofused + end end -end -function context.restart() - _stack_f_, _n_f_ = { }, 0 - _stack_n_, _n_n_ = { }, 0 + callfunctiononce = function(slot) + luafunctions[slot](slot) + noffreed = noffreed + 1 + freed[noffreed] = slot + luafunctions[slot] = false + end + + table.setmetatablecall(luafunctions,function(t,n) return luafunctions[n](n) end) + + knownfunctions = luafunctions + +else + + local luafunctions, noffunctions = { }, 0 + local luanodes, nofnodes = { }, 0 + + usedstack = function() + return noffunctions + nofnodes, 0 + end + + flushfunction = function(n) + local sn = luafunctions[n] + if not sn then + report_cld("data with id %a cannot be found on stack",n) + elseif not sn() and texgetcount("@@trialtypesetting") == 0 then -- 
@@trialtypesetting is private! + luafunctions[n] = nil + end + end + + storefunction = function(ti) + noffunctions = noffunctions + 1 + luafunctions[noffunctions] = ti + return noffunctions + end + + -- freefunction = function(n) + -- luafunctions[n] = nil + -- end + + flushnode = function(n) + local sn = luanodes[n] + if not sn then + report_cld("data with id %a cannot be found on stack",n) + elseif texgetcount("@@trialtypesetting") == 0 then -- @@trialtypesetting is private! + writenode(sn) + luanodes[n] = nil + else + writenode(copynodelist(sn)) + end + end + + storenode = function(ti) + nofnodes = nofnodes + 1 + luanodes[nofnodes] = ti + return nofnodes + end + + _cldf_ = flushfunction -- global + _cldn_ = flushnode -- global + -- _cldl_ = function(n) return luafunctions[n]() end -- luafunctions(n) + _cldl_ = luafunctions + + registerfunction = function(f) + if type(f) == "string" then + f = loadstring(f) + end + if type(f) ~= "function" then + f = function() report_cld("invalid function %A",f) end + end + noffunctions = noffunctions + 1 + luafunctions[noffunctions] = f + return noffunctions + end + + unregisterfunction = function(slot) + if luafunctions[slot] then + luafunctions[slot] = nil + else + report_cld("invalid function slot %A",slot) + end + end + + reservefunction = function() + noffunctions = noffunctions + 1 + return noffunctions + end + + callfunctiononce = function(slot) + luafunctions[slot](slot) + luafunctions[slot] = nil + end + + table.setmetatablecall(luafunctions,function(t,n) return luafunctions[n](n) end) + + knownfunctions = luafunctions + end -context._stack_f_ = _stack_f_ -context._store_f_ = _store_f_ -context._flush_f_ = _flush_f_ _cldf_ = _flush_f_ +context.registerfunction = registerfunction +context.unregisterfunction = unregisterfunction +context.reservefunction = reservefunction +context.knownfunctions = knownfunctions +context.callfunctiononce = callfunctiononce _cldo_ = callfunctiononce +context.storenode = storenode -- 
private helper + +function commands.ctxfunction(code) + context(registerfunction(code)) +end -context._stack_n_ = _stack_n_ -context._store_n_ = _store_n_ -context._flush_n_ = _flush_n_ _cldn_ = _flush_n_ +-- local f_cldo = formatters["_cldo_(%i)"] +-- local latelua_node = nodes.pool.latelua +-- +-- function context.lateluafunctionnnode(f) +-- return latelua_node(f_cldo(registerfunction(f))) +-- end -- Should we keep the catcodes with the function? @@ -359,98 +530,210 @@ end local containseol = patterns.containseol -local function writer(parent,command,first,...) -- already optimized before call - local t = { first, ... } - flush(currentcatcodes,command) -- todo: ctx|prt|texcatcodes - local direct = false - for i=1,#t do - local ti = t[i] - local typ = type(ti) - if direct then - if typ == "string" or typ == "number" then - flush(currentcatcodes,ti) - else -- node.write - report_context("error: invalid use of direct in %a, only strings and numbers can be flushed directly, not %a",command,typ) - end - direct = false - elseif ti == nil then - -- nothing - elseif ti == "" then - flush(currentcatcodes,"{}") - elseif typ == "string" then - -- is processelines seen ? - if processlines and lpegmatch(containseol,ti) then - flush(currentcatcodes,"{") - local flushlines = parent.__flushlines or flushlines - flushlines(ti) - flush(currentcatcodes,"}") - elseif currentcatcodes == contentcatcodes then +local writer + +if luafunctions then + + writer = function (parent,command,first,...) -- already optimized before call + local t = { first, ... 
} + flush(currentcatcodes,command) -- todo: ctx|prt|texcatcodes + local direct = false + for i=1,#t do + local ti = t[i] + local typ = type(ti) + if direct then + if typ == "string" or typ == "number" then + flush(currentcatcodes,ti) + else -- node.write + report_context("error: invalid use of direct in %a, only strings and numbers can be flushed directly, not %a",command,typ) + end + direct = false + elseif ti == nil then + -- nothing + elseif ti == "" then + flush(currentcatcodes,"{}") + elseif typ == "string" then + -- is processelines seen ? + if processlines and lpegmatch(containseol,ti) then + flush(currentcatcodes,"{") + local flushlines = parent.__flushlines or flushlines + flushlines(ti) + flush(currentcatcodes,"}") + elseif currentcatcodes == contentcatcodes then + flush(currentcatcodes,"{",ti,"}") + else + flush(currentcatcodes,"{") + flush(contentcatcodes,ti) + flush(currentcatcodes,"}") + end + elseif typ == "number" then + -- numbers never have funny catcodes flush(currentcatcodes,"{",ti,"}") - else - flush(currentcatcodes,"{") - flush(contentcatcodes,ti) - flush(currentcatcodes,"}") - end - elseif typ == "number" then - -- numbers never have funny catcodes - flush(currentcatcodes,"{",ti,"}") - elseif typ == "table" then - local tn = #ti - if tn == 0 then - local done = false - for k, v in next, ti do - if done then - if v == "" then - flush(currentcatcodes,",",k,'=') + elseif typ == "table" then + local tn = #ti + if tn == 0 then + local done = false + for k, v in next, ti do + if done then + if v == "" then + flush(currentcatcodes,",",k,'=') + else + flush(currentcatcodes,",",k,"={",v,"}") + end else - flush(currentcatcodes,",",k,"={",v,"}") + if v == "" then + flush(currentcatcodes,"[",k,"=") + else + flush(currentcatcodes,"[",k,"={",v,"}") + end + done = true end + end + if done then + flush(currentcatcodes,"]") + else + flush(currentcatcodes,"[]") + end + elseif tn == 1 then -- some 20% faster than the next loop + local tj = ti[1] + if type(tj) 
== "function" then + flush(currentcatcodes,"[\\cldl",storefunction(tj),"]") else - if v == "" then - flush(currentcatcodes,"[",k,"=") + flush(currentcatcodes,"[",tj,"]") + end + else -- is concat really faster than flushes here? probably needed anyway (print artifacts) + flush(currentcatcodes,"[") + for j=1,tn do + local tj = ti[j] + if type(tj) == "function" then + if j == tn then + flush(currentcatcodes,"\\cldl",storefunction(tj),"]") + else + flush(currentcatcodes,"\\cldl",storefunction(tj),",") + end else - flush(currentcatcodes,"[",k,"={",v,"}") + if j == tn then + flush(currentcatcodes,tj,"]") + else + flush(currentcatcodes,tj,",") + end end - done = true end end - if done then - flush(currentcatcodes,"]") + elseif typ == "function" then + flush(currentcatcodes,"{\\cldl ",storefunction(ti),"}") -- todo: ctx|prt|texcatcodes + elseif typ == "boolean" then + if ti then + flushdirect(currentcatcodes,"\r") else - flush(currentcatcodes,"[]") + direct = true + end + elseif typ == "thread" then + report_context("coroutines not supported as we cannot yield across boundaries") + elseif isnode(ti) then -- slow + flush(currentcatcodes,"{\\cldl",storenode(ti),"}") + else + report_context("error: %a gets a weird argument %a",command,ti) + end + end + end + +else + + writer = function (parent,command,first,...) -- already optimized before call + local t = { first, ... 
} + flush(currentcatcodes,command) -- todo: ctx|prt|texcatcodes + local direct = false + for i=1,#t do + local ti = t[i] + local typ = type(ti) + if direct then + if typ == "string" or typ == "number" then + flush(currentcatcodes,ti) + else -- node.write + report_context("error: invalid use of direct in %a, only strings and numbers can be flushed directly, not %a",command,typ) end - elseif tn == 1 then -- some 20% faster than the next loop - local tj = ti[1] - if type(tj) == "function" then - flush(currentcatcodes,"[\\cldf{",_store_f_(tj),"}]") + direct = false + elseif ti == nil then + -- nothing + elseif ti == "" then + flush(currentcatcodes,"{}") + elseif typ == "string" then + -- is processelines seen ? + if processlines and lpegmatch(containseol,ti) then + flush(currentcatcodes,"{") + local flushlines = parent.__flushlines or flushlines + flushlines(ti) + flush(currentcatcodes,"}") + elseif currentcatcodes == contentcatcodes then + flush(currentcatcodes,"{",ti,"}") else - flush(currentcatcodes,"[",tj,"]") + flush(currentcatcodes,"{") + flush(contentcatcodes,ti) + flush(currentcatcodes,"}") end - else -- is concat really faster than flushes here? probably needed anyway (print artifacts) - for j=1,tn do - local tj = ti[j] + elseif typ == "number" then + -- numbers never have funny catcodes + flush(currentcatcodes,"{",ti,"}") + elseif typ == "table" then + local tn = #ti + if tn == 0 then + local done = false + for k, v in next, ti do + if done then + if v == "" then + flush(currentcatcodes,",",k,'=') + else + flush(currentcatcodes,",",k,"={",v,"}") + end + else + if v == "" then + flush(currentcatcodes,"[",k,"=") + else + flush(currentcatcodes,"[",k,"={",v,"}") + end + done = true + end + end + if done then + flush(currentcatcodes,"]") + else + flush(currentcatcodes,"[]") + end + elseif tn == 1 then -- some 20% faster than the next loop + local tj = ti[1] if type(tj) == "function" then - ti[j] = "\\cldf{" .. _store_f_(tj) .. 
"}" + flush(currentcatcodes,"[\\cldf{",storefunction(tj),"}]") + else + flush(currentcatcodes,"[",tj,"]") + end + else -- is concat really faster than flushes here? probably needed anyway (print artifacts) + for j=1,tn do + local tj = ti[j] + if type(tj) == "function" then + ti[j] = "\\cldf{" .. storefunction(tj) .. "}" + end end + flush(currentcatcodes,"[",concat(ti,","),"]") end - flush(currentcatcodes,"[",concat(ti,","),"]") - end - elseif typ == "function" then - flush(currentcatcodes,"{\\cldf{",_store_f_(ti),"}}") -- todo: ctx|prt|texcatcodes - elseif typ == "boolean" then - if ti then - flushdirect(currentcatcodes,"\r") + elseif typ == "function" then + flush(currentcatcodes,"{\\cldf{",storefunction(ti),"}}") -- todo: ctx|prt|texcatcodes + elseif typ == "boolean" then + if ti then + flushdirect(currentcatcodes,"\r") + else + direct = true + end + elseif typ == "thread" then + report_context("coroutines not supported as we cannot yield across boundaries") + elseif isnode(ti) then -- slow + flush(currentcatcodes,"{\\cldn{",storenode(ti),"}}") else - direct = true + report_context("error: %a gets a weird argument %a",command,ti) end - elseif typ == "thread" then - report_context("coroutines not supported as we cannot yield across boundaries") - elseif isnode(ti) then -- slow - flush(currentcatcodes,"{\\cldn{",_store_n_(ti),"}}") - else - report_context("error: %a gets a weird argument %a",command,ti) end end + end local generics = { } context.generics = generics @@ -507,70 +790,154 @@ end function context.constructcsonly(k) -- not much faster than the next but more mem efficient local c = "\\" .. tostring(generics[k] or k) - rawset(context, k, function() + local v = function() flush(prtcatcodes,c) - end) + end + rawset(context,k,v) + return v end function context.constructcs(k) local c = "\\" .. tostring(generics[k] or k) - rawset(context, k, function(first,...) + local v = function(first,...) 
if first == nil then flush(prtcatcodes,c) else return writer(context,c,first,...) end - end) + end + rawset(context,k,v) + return v end -local function caller(parent,f,a,...) - if not parent then - -- so we don't need to test in the calling (slower but often no issue) - elseif f ~= nil then - local typ = type(f) - if typ == "string" then - if a then - flush(contentcatcodes,formatters[f](a,...)) -- was currentcatcodes - elseif processlines and lpegmatch(containseol,f) then - local flushlines = parent.__flushlines or flushlines - flushlines(f) - else - flush(contentcatcodes,f) - end - elseif typ == "number" then - if a then - flush(currentcatcodes,f,a,...) +-- local splitformatters = utilities.strings.formatters.new(true) -- not faster (yet) + +local caller + +if luafunctions then + + caller = function(parent,f,a,...) + if not parent then + -- so we don't need to test in the calling (slower but often no issue) + elseif f ~= nil then + local typ = type(f) + if typ == "string" then + if f == "" then + -- new, can save a bit sometimes + -- if trace_context then + -- report_context("empty argument to context()") + -- end + elseif a then + flush(contentcatcodes,formatters[f](a,...)) -- was currentcatcodes + -- flush(contentcatcodes,splitformatters[f](a,...)) -- was currentcatcodes + elseif processlines and lpegmatch(containseol,f) then + local flushlines = parent.__flushlines or flushlines + flushlines(f) + else + flush(contentcatcodes,f) + end + elseif typ == "number" then + if a then + flush(currentcatcodes,f,a,...) + else + flush(currentcatcodes,f) + end + elseif typ == "function" then + -- ignored: a ... + flush(currentcatcodes,"{\\cldl",storefunction(f),"}") -- todo: ctx|prt|texcatcodes + elseif typ == "boolean" then + if f then + if a ~= nil then + local flushlines = parent.__flushlines or flushlines + flushlines(a) + else + flushdirect(currentcatcodes,"\n") -- no \r, else issues with \startlines ... 
use context.par() otherwise + end + else + if a ~= nil then + -- no command, same as context(a,...) + writer(parent,"",a,...) + else + -- ignored + end + end + elseif typ == "thread" then + report_context("coroutines not supported as we cannot yield across boundaries") + elseif isnode(f) then -- slow + -- writenode(f) + flush(currentcatcodes,"\\cldl",storenode(f)," ") else - flush(currentcatcodes,f) + report_context("error: %a gets a weird argument %a","context",f) end - elseif typ == "function" then - -- ignored: a ... - flush(currentcatcodes,"{\\cldf{",_store_f_(f),"}}") -- todo: ctx|prt|texcatcodes - elseif typ == "boolean" then - if f then - if a ~= nil then + end + end + + function context.flushnode(n) + flush(currentcatcodes,"\\cldl",storenode(n)," ") + end + +else + + caller = function(parent,f,a,...) + if not parent then + -- so we don't need to test in the calling (slower but often no issue) + elseif f ~= nil then + local typ = type(f) + if typ == "string" then + if f == "" then + -- new, can save a bit sometimes + -- if trace_context then + -- report_context("empty argument to context()") + -- end + elseif a then + flush(contentcatcodes,formatters[f](a,...)) -- was currentcatcodes + -- flush(contentcatcodes,splitformatters[f](a,...)) -- was currentcatcodes + elseif processlines and lpegmatch(containseol,f) then local flushlines = parent.__flushlines or flushlines - flushlines(a) + flushlines(f) else - flushdirect(currentcatcodes,"\n") -- no \r, else issues with \startlines ... use context.par() otherwise + flush(contentcatcodes,f) end - else - if a ~= nil then - -- no command, same as context(a,...) - writer(parent,"",a,...) + elseif typ == "number" then + if a then + flush(currentcatcodes,f,a,...) + else + flush(currentcatcodes,f) + end + elseif typ == "function" then + -- ignored: a ... 
+ flush(currentcatcodes,"{\\cldf{",storefunction(f),"}}") -- todo: ctx|prt|texcatcodes + elseif typ == "boolean" then + if f then + if a ~= nil then + local flushlines = parent.__flushlines or flushlines + flushlines(a) + else + flushdirect(currentcatcodes,"\n") -- no \r, else issues with \startlines ... use context.par() otherwise + end else - -- ignored + if a ~= nil then + -- no command, same as context(a,...) + writer(parent,"",a,...) + else + -- ignored + end end + elseif typ == "thread" then + report_context("coroutines not supported as we cannot yield across boundaries") + elseif isnode(f) then -- slow + -- writenode(f) + flush(currentcatcodes,"\\cldn{",storenode(f),"}") + else + report_context("error: %a gets a weird argument %a","context",f) end - elseif typ == "thread" then - report_context("coroutines not supported as we cannot yield across boundaries") - elseif isnode(f) then -- slow - -- writenode(f) - flush(currentcatcodes,"\\cldn{",_store_n_(f),"}") - else - report_context("error: %a gets a weird argument %a","context",f) end end + + function context.flushnode(n) + flush(currentcatcodes,"\\cldn{",storenode(n),"}") + end + end local defaultcaller = caller @@ -642,8 +1009,12 @@ local visualizer = lpeg.replacer { } statistics.register("traced context", function() + local used, freed = usedstack() + local unreachable = used - freed if nofwriters > 0 or nofflushes > 0 then - return format("writers: %s, flushes: %s, maxstack: %s",nofwriters,nofflushes,_n_f_) + return format("writers: %s, flushes: %s, maxstack: %s",nofwriters,nofflushes,used,freed,unreachable) + elseif showstackusage or unreachable > 0 then + return format("maxstack: %s, freed: %s, unreachable: %s",used,freed,unreachable) end end) @@ -1019,7 +1390,8 @@ local function caller(parent,f,a,...) end elseif typ == "function" then -- ignored: a ... 
- flush(currentcatcodes,mpdrawing,"{\\cldf{",store_(f),"}}") +-- flush(currentcatcodes,mpdrawing,"{\\cldf{",store_(f),"}}") + flush(currentcatcodes,mpdrawing,"{\\cldl",store_(f),"}") elseif typ == "boolean" then -- ignored: a ... if f then diff --git a/tex/context/base/cldf-ini.mkiv b/tex/context/base/cldf-ini.mkiv index 258409d7a..12ada1383 100644 --- a/tex/context/base/cldf-ini.mkiv +++ b/tex/context/base/cldf-ini.mkiv @@ -36,6 +36,15 @@ \def\cldf#1{\directlua{_cldf_(#1)}} % global (functions) \def\cldn#1{\directlua{_cldn_(#1)}} % global (nodes) +\ifx\luafunction\undefined + \def\luafunction#1{\directlua{_cldl_(#1)}} +\fi + +\let\cldl\luafunction + +% \catcodetable\ctxcatcodes \catcode`^=\superscriptcatcode\catcode1=\activecatcode \global\let^^A=\cldf +% \catcodetable\ctxcatcodes \catcode`^=\superscriptcatcode\catcode2=\activecatcode \global\let^^B=\cldn + \normalprotected\def\cldprocessfile#1{\directlua{context.runfile("#1")}} \def\cldloadfile #1{\directlua{context.loadfile("#1")}} \def\cldcontext #1{\directlua{context(#1)}} diff --git a/tex/context/base/cldf-ver.lua b/tex/context/base/cldf-ver.lua index b48fd253a..66432eb1c 100644 --- a/tex/context/base/cldf-ver.lua +++ b/tex/context/base/cldf-ver.lua @@ -56,16 +56,18 @@ function context.tocontext(first,...) end end -function context.tobuffer(name,str) - context.startbuffer { name } - context.pushcatcodes("verbatim") - local lines = (type(str) == "string" and find(str,"[\n\r]") and splitlines(str)) or str - for i=1,#lines do - context(lines[i] .. " ") - end - context.stopbuffer() - context.popcatcodes() -end +-- function context.tobuffer(name,str) +-- context.startbuffer { name } +-- context.pushcatcodes("verbatim") +-- local lines = (type(str) == "string" and find(str,"[\n\r]") and splitlines(str)) or str +-- for i=1,#lines do +-- context(lines[i] .. 
" ") +-- end +-- context.stopbuffer() +-- context.popcatcodes() +-- end + +context.tobuffer = buffers.assign -- (name,str,catcodes) function context.tolines(str) local lines = type(str) == "string" and splitlines(str) or str diff --git a/tex/context/base/colo-ini.lua b/tex/context/base/colo-ini.lua index 535ee71b8..94e9e6615 100644 --- a/tex/context/base/colo-ini.lua +++ b/tex/context/base/colo-ini.lua @@ -65,24 +65,37 @@ function colors.setlist(name) return table.sortedkeys(name and name ~= "" and colorsets[name] or colorsets.default or {}) end +local context_colordefagc = context.colordefagc +local context_colordefagt = context.colordefagt +local context_colordefalc = context.colordefalc +local context_colordefalt = context.colordefalt +local context_colordeffgc = context.colordeffgc +local context_colordeffgt = context.colordeffgt +local context_colordefflc = context.colordefflc +local context_colordefflt = context.colordefflt +local context_colordefrgc = context.colordefrgc +local context_colordefrgt = context.colordefrgt +local context_colordefrlc = context.colordefrlc +local context_colordefrlt = context.colordefrlt + local function definecolor(name, ca, global) if ca and ca > 0 then if global then if trace_define then report_colors("define global color %a with attribute %a",name,ca) end - context.colordefagc(name,ca) + context_colordefagc(name,ca) else if trace_define then report_colors("define local color %a with attribute %a",name,ca) end - context.colordefalc(name,ca) + context_colordefalc(name,ca) end else if global then - context.colordefrgc(name) + context_colordefrgc(name) else - context.colordefrlc(name) + context_colordefrlc(name) end end colorset[name] = true-- maybe we can store more @@ -94,18 +107,18 @@ local function inheritcolor(name, ca, global) if trace_define then report_colors("inherit global color %a with attribute %a",name,ca) end - context.colordeffgc(name,ca) -- some day we will set the macro directly + context_colordeffgc(name,ca) -- 
some day we will set the macro directly else if trace_define then report_colors("inherit local color %a with attribute %a",name,ca) end - context.colordefflc(name,ca) + context_colordefflc(name,ca) end else if global then - context.colordefrgc(name) + context_colordefrgc(name) else - context.colordefrlc(name) + context_colordefrlc(name) end end colorset[name] = true-- maybe we can store more @@ -117,18 +130,18 @@ local function definetransparent(name, ta, global) if trace_define then report_colors("define global transparency %a with attribute %a",name,ta) end - context.colordefagt(name,ta) + context_colordefagt(name,ta) else if trace_define then report_colors("define local transparency %a with attribute %a",name,ta) end - context.colordefalt(name,ta) + context_colordefalt(name,ta) end else if global then - context.colordefrgt(name) + context_colordefrgt(name) else - context.colordefrlt(name) + context_colordefrlt(name) end end end @@ -139,18 +152,18 @@ local function inherittransparent(name, ta, global) if trace_define then report_colors("inherit global transparency %a with attribute %a",name,ta) end - context.colordeffgt(name,ta) + context_colordeffgt(name,ta) else if trace_define then report_colors("inherit local transparency %a with attribute %a",name,ta) end - context.colordefflt(name,ta) + context_colordefflt(name,ta) end else if global then - context.colordefrgt(name) + context_colordefrgt(name) else - context.colordefrlt(name) + context_colordefrlt(name) end end end @@ -382,7 +395,7 @@ function colors.isblack(ca) -- maybe commands end function colors.definespotcolor(name,parent,str,global) - if parent == "" or find(parent,"=") then + if parent == "" or find(parent,"=",1,true) then colors.registerspotcolor(name, parent) elseif name ~= parent then local cp = attributes_list[a_color][parent] diff --git a/tex/context/base/cont-new.mkiv b/tex/context/base/cont-new.mkiv index 733afc6d0..e3df6f7bf 100644 --- a/tex/context/base/cont-new.mkiv +++ 
b/tex/context/base/cont-new.mkiv @@ -11,7 +11,7 @@ %C therefore copyrighted by \PRAGMA. See mreadme.pdf for %C details. -\newcontextversion{2014.02.14 17:07} +\newcontextversion{2014.04.28 23:24} %D This file is loaded at runtime, thereby providing an excellent place for %D hacks, patches, extensions and new features. diff --git a/tex/context/base/context-version.pdf b/tex/context/base/context-version.pdf Binary files differindex 275625528..6450c43f1 100644 --- a/tex/context/base/context-version.pdf +++ b/tex/context/base/context-version.pdf diff --git a/tex/context/base/context.mkiv b/tex/context/base/context.mkiv index 8c67fbd50..e1ade2ba1 100644 --- a/tex/context/base/context.mkiv +++ b/tex/context/base/context.mkiv @@ -28,7 +28,7 @@ %D up and the dependencies are more consistent. \edef\contextformat {\jobname} -\edef\contextversion{2014.02.14 17:07} +\edef\contextversion{2014.04.28 23:24} \edef\contextkind {beta} %D For those who want to use this: @@ -234,7 +234,7 @@ \loadmarkfile{strc-xml} \loadmarkfile{strc-def} % might happen later \loadmkvifile{strc-ref} -\loadmarkfile{strc-reg} +%loadmarkfile{strc-reg} \loadmkvifile{strc-lev} % experiment \loadmarkfile{spac-ali} @@ -296,7 +296,9 @@ \loadmarkfile{pack-pos} \loadmkvifile{page-mak} -\loadmarkfile{page-lin} +\loadmarkfile{strc-reg} % uses mixed columns + +\loadmkvifile{page-lin} \loadmarkfile{page-par} \loadmarkfile{typo-pag} \loadmarkfile{typo-mar} diff --git a/tex/context/base/core-con.lua b/tex/context/base/core-con.lua index dad24a7d4..73ca3e304 100644 --- a/tex/context/base/core-con.lua +++ b/tex/context/base/core-con.lua @@ -14,14 +14,13 @@ slower but look nicer this way.</p> <p>Some code may move to a module in the language namespace.</p> --ldx]]-- -local command, context = commands, context - local floor, date, time, concat = math.floor, os.date, os.time, table.concat local lower, rep, match = string.lower, string.rep, string.match local utfchar, utfbyte = utf.char, utf.byte local tonumber, tostring = 
tonumber, tostring local context = context +local commands = commands local settings_to_array = utilities.parsers.settings_to_array local allocate = utilities.storage.allocate @@ -37,9 +36,8 @@ local languages = languages converters.number = tonumber converters.numbers = tonumber -function commands.number(n) context(n) end - -commands.numbers = commands.number +commands.number = context +commands.numbers = context -- to be reconsidered ... languages namespace here, might become local plus a register command diff --git a/tex/context/base/core-env.mkiv b/tex/context/base/core-env.mkiv index 1c92a371c..ca134e230 100644 --- a/tex/context/base/core-env.mkiv +++ b/tex/context/base/core-env.mkiv @@ -232,21 +232,21 @@ %D \starttyping %D \enablemode[two] %D -%D \startmodes +%D \startmodeset %D [one] {1} %D [two] {2} %D [two] {2} %D [three] {3} %D [default] {?} -%D \stopmodes +%D \stopmodeset %D -%D \startmodes +%D \startmodeset %D [one] {1} %D [three] {3} %D [default] {?} -%D \stopmodes +%D \stopmodeset %D -%D \startmodes +%D \startmodeset %D [one] { %D \input tufte %D } @@ -265,7 +265,7 @@ %D [default] { %D \input ward %D } -%D \stopmodes +%D \stopmodeset %D \stoptyping \newconditional\c_syst_modes_set_done % conditionals can be pushed/popped @@ -273,7 +273,7 @@ \unexpanded\def\startmodeset {\pushmacro\c_syst_modes_set_done \setfalse\conditionalfalse - \doifnextoptionalelse\syst_modes_set_start\syst_modes_set_quit} + \doifnextoptionalcselse\syst_modes_set_start\syst_modes_set_quit} \def\syst_modes_set_start[#1]% {\edef\m_mode_case{#1}% @@ -293,10 +293,10 @@ \def\syst_modes_set_yes#1% {\settrue\c_syst_modes_set_done #1% - \doifnextoptionalelse\syst_modes_set_start\syst_modes_set_quit} + \doifnextoptionalcselse\syst_modes_set_start\syst_modes_set_quit} \def\syst_modes_set_nop#1% - {\doifnextoptionalelse\syst_modes_set_start\syst_modes_set_quit} + {\doifnextoptionalcselse\syst_modes_set_start\syst_modes_set_quit} \def\syst_modes_set_quit#1\stopmodeset 
{\popmacro\c_syst_modes_set_done} @@ -316,7 +316,7 @@ \expanded % will become obsolete {\def\expandafter\noexpand\csname\e!start\v!setups\endcsname - {\begingroup\noexpand\doifnextoptionalelse + {\begingroup\noexpand\doifnextoptionalcselse {\noexpand\dostartsetupsA\expandafter\noexpand\csname\e!stop\v!setups\endcsname} {\noexpand\dostartsetupsB\expandafter\noexpand\csname\e!stop\v!setups\endcsname}}} @@ -467,11 +467,11 @@ % Is doglobal still relevant? Maybe always global? Or never? Anyway, it will become obsolete. -\unexpanded\def\startluasetups {\begingroup\doifnextoptionalelse\syst_setups_start_lua_a\syst_setups_start_lua_b} -\unexpanded\def\startxmlsetups {\begingroup\doifnextoptionalelse\syst_setups_start_xml_a\syst_setups_start_xml_b} -\unexpanded\def\startrawsetups {\begingroup\doifnextoptionalelse\syst_setups_start_raw_a\syst_setups_start_raw_b} -\unexpanded\def\startlocalsetups{\begingroup\doifnextoptionalelse\syst_setups_start_loc_a\syst_setups_start_loc_b} -\unexpanded\def\startsetups {\begingroup\doifnextoptionalelse\syst_setups_start_tex_a\syst_setups_start_tex_b} +\unexpanded\def\startluasetups {\begingroup\doifnextoptionalcselse\syst_setups_start_lua_a\syst_setups_start_lua_b} +\unexpanded\def\startxmlsetups {\begingroup\doifnextoptionalcselse\syst_setups_start_xml_a\syst_setups_start_xml_b} +\unexpanded\def\startrawsetups {\begingroup\doifnextoptionalcselse\syst_setups_start_raw_a\syst_setups_start_raw_b} +\unexpanded\def\startlocalsetups{\begingroup\doifnextoptionalcselse\syst_setups_start_loc_a\syst_setups_start_loc_b} +\unexpanded\def\startsetups {\begingroup\doifnextoptionalcselse\syst_setups_start_tex_a\syst_setups_start_tex_b} \let\stopluasetups \relax \let\stopxmlsetups \relax diff --git a/tex/context/base/core-ini.mkiv b/tex/context/base/core-ini.mkiv index 1682bed1b..711c43f94 100644 --- a/tex/context/base/core-ini.mkiv +++ b/tex/context/base/core-ini.mkiv @@ -58,14 +58,21 @@ \newtoks \everyforgetall \newtoks \everycleanupfeatures \newtoks 
\everysimplifycommands +\newtoks \everypreroll \let\simplifiedcommands\everysimplifycommands % backward compatible, will stay as it's used in styles +\newconditional\simplifyingcommands % public + \unexpanded\def\forgetall {\the\everyforgetall} \unexpanded\def\cleanupfeatures {\the\everycleanupfeatures} \unexpanded\def\simplifycommands{\the\everysimplifycommands} \appendtoks + \settrue\simplifyingcommands +\to \everysimplifycommands + +\appendtoks \everypar\emptytoks % pretty important \to \everyforgetall diff --git a/tex/context/base/core-sys.lua b/tex/context/base/core-sys.lua index 009ec16ea..22b0e457c 100644 --- a/tex/context/base/core-sys.lua +++ b/tex/context/base/core-sys.lua @@ -94,7 +94,7 @@ statistics.register("result saved in file", function() -- suffix will be fetched from backend local outputfilename = environment.outputfilename or environment.jobname or tex.jobname or "<unset>" if tex.pdfoutput > 0 then - return format("%s.%s, compresslevel %s, objectcompreslevel %s",outputfilename,"pdf",tex.pdfcompresslevel, tex.pdfobjcompresslevel) + return format("%s.%s, compresslevel %s, objectcompresslevel %s",outputfilename,"pdf",tex.pdfcompresslevel, tex.pdfobjcompresslevel) else return format("%s.%s",outputfilename,"dvi") -- hard to imagine end diff --git a/tex/context/base/core-uti.lua b/tex/context/base/core-uti.lua index 71b80170c..1903ad823 100644 --- a/tex/context/base/core-uti.lua +++ b/tex/context/base/core-uti.lua @@ -36,7 +36,7 @@ local report_passes = logs.reporter("job","passes") job = job or { } local job = job -job.version = 1.24 +job.version = 1.25 job.packversion = 1.02 -- some day we will implement loading of other jobs and then we need @@ -51,7 +51,13 @@ directly access the variable using a <l n='lua'/> call.</p> local savelist, comment = { }, { } function job.comment(key,value) - comment[key] = value + if type(key) == "table" then + for k, v in next, key do + comment[k] = v + end + else + comment[key] = value + end end 
job.comment("version",job.version) @@ -73,8 +79,8 @@ function job.initialize(loadname,savename) end) end -function job.register(collected, tobesaved, initializer, finalizer) - savelist[#savelist+1] = { collected, tobesaved, initializer, finalizer } +function job.register(collected, tobesaved, initializer, finalizer, serializer) + savelist[#savelist+1] = { collected, tobesaved, initializer, finalizer, serializer } end -- as an example we implement variables @@ -100,7 +106,7 @@ job.register('job.variables.checksums', 'job.variables.checksums', initializer) local rmethod, rvalue -local setxvalue = context.setxvalue +local ctx_setxvalue = context.setxvalue local function initializer() tobesaved = jobvariables.tobesaved @@ -116,7 +122,7 @@ local function initializer() end tobesaved.randomseed = rvalue for cs, value in next, collected do - setxvalue(cs,value) + ctx_setxvalue(cs,value) end end @@ -175,10 +181,12 @@ function job.save(filename) -- we could return a table but it can get pretty lar f:write("local utilitydata = { }\n\n") f:write(serialize(comment,"utilitydata.comment",true),"\n\n") for l=1,#savelist do - local list = savelist[l] - local target = format("utilitydata.%s",list[1]) - local data = list[2] - local finalizer = list[4] + -- f:write("do\n\n") -- no solution for the jit limitatione either + local list = savelist[l] + local target = format("utilitydata.%s",list[1]) + local data = list[2] + local finalizer = list[4] + local serializer = list[5] if type(data) == "string" then data = utilities.tables.accesstable(data) end @@ -189,11 +197,18 @@ function job.save(filename) -- we could return a table but it can get pretty lar packers.pack(data,jobpacker,true) end local definer, name = definetable(target,true,true) -- no first and no last - f:write(definer,"\n\n",serialize(data,name,true),"\n\n") + if serializer then + f:write(definer,"\n\n",serializer(data,name,true),"\n\n") + else + f:write(definer,"\n\n",serialize(data,name,true),"\n\n") + end + -- 
f:write("end\n\n") end if job.pack then packers.strip(jobpacker) + -- f:write("do\n\n") f:write(serialize(jobpacker,"utilitydata.job.packed",true),"\n\n") + -- f:write("end\n\n") end f:write("return utilitydata") f:close() @@ -214,8 +229,9 @@ local function load(filename) return data end else - os.remove(filename) -- probably a bad file - report_passes("removing stale job data file %a, restart job",filename) + os.remove(filename) -- probably a bad file (or luajit overflow as it cannot handle large tables well) + report_passes("removing stale job data file %a, restart job, message: %s%s",filename,tostring(data), + jit and " (try luatex instead of luajittex)" or "") os.exit(true) -- trigger second run end end @@ -323,16 +339,19 @@ function statistics.formatruntime(runtime) if shipped > 0 or pages > 0 then local persecond = shipped / runtime if pages == 0 then pages = shipped end -if jit then -local saved = watts_per_core * runtime * kg_per_watt_per_second / speedup_by_other_engine -local saved = used_wood_factor * runtime --- return format("%s seconds, %i processed pages, %i shipped pages, %.3f pages/second, %f kg tree saved by using luajittex",runtime,pages,shipped,persecond,saved) - return format("%s seconds, %i processed pages, %i shipped pages, %.3f pages/second, %f mg tree saved by using luajittex",runtime,pages,shipped,persecond,saved*1000*1000) -else - return format("%s seconds, %i processed pages, %i shipped pages, %.3f pages/second",runtime,pages,shipped,persecond) -end + -- if jit then + -- local saved = watts_per_core * runtime * kg_per_watt_per_second / speedup_by_other_engine + -- local saved = used_wood_factor * runtime + -- return format("%s seconds, %i processed pages, %i shipped pages, %.3f pages/second, %f mg tree saved by using luajittex",runtime,pages,shipped,persecond,saved*1000*1000) + -- else + return format("%s seconds, %i processed pages, %i shipped pages, %.3f pages/second",runtime,pages,shipped,persecond) + -- end else return format("%s 
seconds",runtime) end end end + + +commands.savevariable = job.variables.save +commands.setjobcomment = job.comment diff --git a/tex/context/base/core-uti.mkiv b/tex/context/base/core-uti.mkiv index 527b90445..5937240b9 100644 --- a/tex/context/base/core-uti.mkiv +++ b/tex/context/base/core-uti.mkiv @@ -18,21 +18,19 @@ \registerctxluafile{core-uti}{1.001} \def\savecurrentvalue#1#2% immediate, so not \unexpanded - {\ctxlua{job.variables.save("\strippedcsname#1","#2")}} + {\ctxcommand{savevariable("\strippedcsname#1","#2")}} \appendtoks - \ctxlua { - % job.comment("file","\jobname") - job.comment("file",tex.jobname) - job.comment("format","\contextformat") - job.comment("stamp","\contextversion") - job.comment("escape","\!!bs\space...\space\!!es") - }% + \ctxlua{job.comment{ + file = tex.jobname, + format = "\contextformat", + stamp = "\contextversion", + escape = "\!!bs\space...\space\!!es" + }}% \to \everystarttext \appendtoks \ctxlua { - % job.initialize("\jobname.tuc","\jobname.tua") job.initialize(tex.jobname .. ".tuc",tex.jobname .. 
".tua") }% \to \everyjob diff --git a/tex/context/base/data-exp.lua b/tex/context/base/data-exp.lua index c67e97bb1..9534e73a0 100644 --- a/tex/context/base/data-exp.lua +++ b/tex/context/base/data-exp.lua @@ -123,7 +123,7 @@ local function splitpathexpr(str, newlist, validate) -- I couldn't resist lpeggi local old = str str = lpegmatch(l_rest, str) until old == str - until old == str -- or not find(str,"{") + until old == str -- or not find(str,"{",1,true) str = lpegmatch(stripper_1,str) if validate then for s in gmatch(str,"[^,]+") do @@ -191,7 +191,7 @@ function resolvers.cleanpath(str) -- tricky, maybe only simple paths report_expansions("no home dir set, ignoring dependent paths") end function resolvers.cleanpath(str) - if not str or find(str,"~") then + if not str or find(str,"~",1,true) then return "" -- special case else return lpegmatch(cleanup,str) diff --git a/tex/context/base/data-ini.lua b/tex/context/base/data-ini.lua index 201c6a2d7..bbd233ae7 100644 --- a/tex/context/base/data-ini.lua +++ b/tex/context/base/data-ini.lua @@ -217,7 +217,7 @@ end environment.texroot = file.collapsepath(texroot) -if profiler then +if type(profiler) == "table" and not jit then directives.register("system.profile",function() profiler.start("luatex-profile.log") end) diff --git a/tex/context/base/data-met.lua b/tex/context/base/data-met.lua index ee9de3fd9..67b9eb22b 100644 --- a/tex/context/base/data-met.lua +++ b/tex/context/base/data-met.lua @@ -38,7 +38,7 @@ local function splitmethod(filename) -- todo: filetype in specification -- filename = gsub(filename,"^%./",getcurrentdir().."/") -- we will merge dir.expandname and collapse some day - if not find(filename,"://") then + if not find(filename,"://",1,true) then return { scheme = "file", path = filename, original = filename, filename = filename } end local specification = url.hashed(filename) diff --git a/tex/context/base/data-res.lua b/tex/context/base/data-res.lua index 64c38f82c..8e2a4978a 100644 --- 
a/tex/context/base/data-res.lua +++ b/tex/context/base/data-res.lua @@ -359,7 +359,7 @@ local function identify_configuration_files() -- todo: environment.skipweirdcnfpaths directive if trace_locating then local fullpath = gsub(resolvers.resolve(collapsepath(filepath)),"//","/") - local weirdpath = find(fullpath,"/texmf.+/texmf") or not find(fullpath,"/web2c") + local weirdpath = find(fullpath,"/texmf.+/texmf") or not find(fullpath,"/web2c",1,true) report_resolving("looking for %a on %s path %a from specification %a",luacnfname,weirdpath and "weird" or "given",fullpath,filepath) end if lfs.isfile(realname) then @@ -1027,7 +1027,7 @@ local function find_direct(filename,allresults) end local function find_wildcard(filename,allresults) - if find(filename,'%*') then + if find(filename,'*',1,true) then if trace_locating then report_resolving("checking wildcard %a", filename) end @@ -1204,7 +1204,7 @@ local function find_intree(filename,filetype,wantedfiles,allresults) local scheme = url.hasscheme(pathname) if not scheme or scheme == "file" then local pname = gsub(pathname,"%.%*$",'') - if not find(pname,"%*") then + if not find(pname,"*",1,true) then if can_be_dir(pname) then -- quick root scan first for k=1,#wantedfiles do @@ -1510,7 +1510,7 @@ local function findwildcardfiles(filename,allresults,result) -- todo: remap: and local path = lower(lpegmatch(makewildcard,dirn) or dirn) local name = lower(lpegmatch(makewildcard,base) or base) local files, done = instance.files, false - if find(name,"%*") then + if find(name,"*",1,true) then local hashes = instance.hashes for k=1,#hashes do local hash = hashes[k] diff --git a/tex/context/base/data-sch.lua b/tex/context/base/data-sch.lua index 41b941c5a..adc774489 100644 --- a/tex/context/base/data-sch.lua +++ b/tex/context/base/data-sch.lua @@ -54,7 +54,7 @@ end local cached, loaded, reused, thresholds, handlers = { }, { }, { }, { }, { } local function runcurl(name,cachename) -- we use sockets instead or the curl library when 
possible - local command = "curl --silent --create-dirs --output " .. cachename .. " " .. name + local command = "curl --silent --insecure --create-dirs --output " .. cachename .. " " .. name os.spawn(command) end diff --git a/tex/context/base/file-ini.lua b/tex/context/base/file-ini.lua index 2bc742a1f..3314bb33d 100644 --- a/tex/context/base/file-ini.lua +++ b/tex/context/base/file-ini.lua @@ -11,27 +11,29 @@ if not modules then modules = { } end modules ['file-ini'] = { <l n='tex'/>. These methods have counterparts at the <l n='tex'/> end.</p> --ldx]]-- -resolvers.jobs = resolvers.jobs or { } +resolvers.jobs = resolvers.jobs or { } -local texsetcount = tex.setcount -local setvalue = context.setvalue +local texsetcount = tex.setcount + +local context_setvalue = context.setvalue +local commands_doifelse = commands.doifelse function commands.splitfilename(fullname) local t = file.nametotable(fullname) local path = t.path texsetcount("splitoffkind",(path == "" and 0) or (path == '.' and 1) or 2) - setvalue("splitofffull",fullname) - setvalue("splitoffpath",path) - setvalue("splitoffname",t.name) - setvalue("splitoffbase",t.base) - setvalue("splitofftype",t.suffix) + context_setvalue("splitofffull",fullname) + context_setvalue("splitoffpath",path) + context_setvalue("splitoffname",t.name) + context_setvalue("splitoffbase",t.base) + context_setvalue("splitofftype",t.suffix) end function commands.doifparentfileelse(n) - commands.doifelse(n == environment.jobname or n == environment.jobname .. '.tex' or n == environment.outputfilename) + commands_doifelse(n == environment.jobname or n == environment.jobname .. 
'.tex' or n == environment.outputfilename) end function commands.doiffileexistelse(name) local foundname = resolvers.findtexfile(name) - commands.doifelse(foundname and foundname ~= "") + commands_doifelse(foundname and foundname ~= "") end diff --git a/tex/context/base/file-job.mkvi b/tex/context/base/file-job.mkvi index fa395a32e..5f646ed28 100644 --- a/tex/context/base/file-job.mkvi +++ b/tex/context/base/file-job.mkvi @@ -129,20 +129,20 @@ \unexpanded\def\processfileonce#name{\ctxcommand{processfileonce("#name")}} \unexpanded\def\processfilenone#name{\ctxcommand{processfilenone("#name")}} -\unexpanded\def\project {\doifnextoptionalelse\useproject \syst_structure_arg_project} -\unexpanded\def\product {\doifnextoptionalelse\useproduct \syst_structure_arg_product} -\unexpanded\def\component {\doifnextoptionalelse\usecomponent \syst_structure_arg_component} -\unexpanded\def\environment{\doifnextoptionalelse\useenvironment\syst_structure_arg_environment} +\unexpanded\def\project {\doifnextoptionalcselse\useproject \syst_structure_arg_project} +\unexpanded\def\product {\doifnextoptionalcselse\useproduct \syst_structure_arg_product} +\unexpanded\def\component {\doifnextoptionalcselse\usecomponent \syst_structure_arg_component} +\unexpanded\def\environment{\doifnextoptionalcselse\useenvironment\syst_structure_arg_environment} \def\syst_structure_arg_project #name {\ctxcommand{useproject ("#name")}} \def\syst_structure_arg_product #name {\ctxcommand{useproduct ("#name")}} \def\syst_structure_arg_component #name {\ctxcommand{usecomponent ("#name")}} \def\syst_structure_arg_environment#name {\ctxcommand{useenvironment("#name")}} -\unexpanded\def\startproject {\doifnextoptionalelse\syst_structure_start_opt_project \syst_structure_start_arg_project } -\unexpanded\def\startproduct {\doifnextoptionalelse\syst_structure_start_opt_product \syst_structure_start_arg_product } -\unexpanded\def\startcomponent {\doifnextoptionalelse\syst_structure_start_opt_component 
\syst_structure_start_arg_component } -\unexpanded\def\startenvironment{\doifnextoptionalelse\syst_structure_start_opt_environment\syst_structure_start_arg_environment} +\unexpanded\def\startproject {\doifnextoptionalcselse\syst_structure_start_opt_project \syst_structure_start_arg_project } +\unexpanded\def\startproduct {\doifnextoptionalcselse\syst_structure_start_opt_product \syst_structure_start_arg_product } +\unexpanded\def\startcomponent {\doifnextoptionalcselse\syst_structure_start_opt_component \syst_structure_start_arg_component } +\unexpanded\def\startenvironment{\doifnextoptionalcselse\syst_structure_start_opt_environment\syst_structure_start_arg_environment} \def\syst_structure_start_arg_project #name {\ctxcommand{startproject ("#name")}} \def\syst_structure_start_arg_product #name {\ctxcommand{startproduct ("#name")}} @@ -216,7 +216,7 @@ % {\letvalue{\e!stop\v!localenvironment}\relax} % {\grabuntil{\e!stop\v!localenvironment}\gobbleoneargument}} % -% \setvalue{\v!localenvironment}{\doifnextoptionalelse\uselocalenvironment\redolocalenvironment} +% \setvalue{\v!localenvironment}{\doifnextoptionalcselse\uselocalenvironment\redolocalenvironment} % % \def\redolocalenvironment#1 {\uselocalenvironment[#1]} % \def\uselocalenvironment[#1]{\doexecutefileonce{#1}} diff --git a/tex/context/base/file-mod.mkvi b/tex/context/base/file-mod.mkvi index 00966a442..1591a69cc 100644 --- a/tex/context/base/file-mod.mkvi +++ b/tex/context/base/file-mod.mkvi @@ -107,7 +107,7 @@ \newtoks\everysetupmodule \unexpanded\def\startmodule - {\doifnextoptionalelse\syst_modules_start_yes\syst_modules_start_nop} + {\doifnextoptionalcselse\syst_modules_start_yes\syst_modules_start_nop} \def\syst_modules_start_yes[#name]% {\pushmacro\currentmodule diff --git a/tex/context/base/file-res.lua b/tex/context/base/file-res.lua index 8a50c0d58..9ae7a6b06 100644 --- a/tex/context/base/file-res.lua +++ b/tex/context/base/file-res.lua @@ -136,7 +136,7 @@ function getreadfilename(scheme,path,name) 
-- better do a split and then pass ta if hasscheme(name) or is_qualified_path(name) then fullname = name else - if not find(name,"%%") then + if not find(name,"%",1,true) then name = urlescape(name) -- if no % in names end fullname = ((path == "") and format("%s:///%s",scheme,name)) or format("%s:///%s/%s",scheme,path,name) diff --git a/tex/context/base/font-afm.lua b/tex/context/base/font-afm.lua index adb4281b2..46ea8a423 100644 --- a/tex/context/base/font-afm.lua +++ b/tex/context/base/font-afm.lua @@ -15,6 +15,14 @@ n='otf'/>.</p> <p>The following code still has traces of intermediate font support where we handles font encodings. Eventually font encoding goes away.</p> + +<p>The embedding of a font involves creating temporary files and +depending on your system setup that can fail. It took more than a +day to figure out why sometimes embedding failed in mingw luatex +where running on a real path like c:\... failed while running on +say e:\... being a link worked well. The native windows binaries +don't have this issue.</p> + --ldx]]-- local fonts, logs, trackers, containers, resolvers = fonts, logs, trackers, containers, resolvers @@ -221,6 +229,7 @@ local function get_indexes(data,pfbname) report_afm("getting index data from %a",pfbname) end for index, glyph in next, glyphs do + -- for index, glyph in table.sortedhash(glyphs) do local name = glyph.name if name then local char = characters[name] @@ -336,6 +345,7 @@ function afm.load(filename) get_indexes(data,pfbname) elseif trace_loading then report_afm("no pfb file for %a",filename) + -- data.resources.filename = "unset" -- better than loading the afm file end report_afm("unifying %a",filename) unify(data,filename) @@ -410,7 +420,7 @@ unify = function(data, filename) if unicode then krn[unicode] = kern else - print(unicode,name) + -- print(unicode,name) end end description.kerns = krn diff --git a/tex/context/base/font-chk.lua b/tex/context/base/font-chk.lua index 5d4f6059b..591d59d65 100644 --- 
a/tex/context/base/font-chk.lua +++ b/tex/context/base/font-chk.lua @@ -9,6 +9,8 @@ if not modules then modules = { } end modules ['font-chk'] = { -- possible optimization: delayed initialization of vectors -- move to the nodes namespace +local next = next + local formatters = string.formatters local bpfactor = number.dimenfactors.bp local fastcopy = table.fastcopy @@ -32,6 +34,8 @@ local getprivatenode = helpers.getprivatenode local otffeatures = fonts.constructors.newfeatures("otf") local registerotffeature = otffeatures.register +local afmfeatures = fonts.constructors.newfeatures("afm") +local registerafmfeature = afmfeatures.register local is_character = characters.is_character local chardata = characters.data @@ -159,7 +163,7 @@ local variants = { { tag = "yellow", r = .6, g = .6, b = 0 }, } -local pdf_blob = "pdf: q %0.6f 0 0 %0.6f 0 0 cm %s %s %s rg %s %s %s RG 10 M 1 j 1 J 0.05 w %s Q" +local pdf_blob = "pdf: q %0.6F 0 0 %0.6F 0 0 cm %s %s %s rg %s %s %s RG 10 M 1 j 1 J 0.05 w %s Q" local cache = { } -- saves some tables but not that impressive @@ -403,3 +407,46 @@ local function expandglyph(characters,index,done) end helpers.expandglyph = expandglyph + +-- should not be needed as we add .notdef in the engine + +local dummyzero = { + -- width = 0, + -- height = 0, + -- depth = 0, + commands = { { "special", "" } }, +} + +local function adddummysymbols(tfmdata,...) 
+ local characters = tfmdata.characters + if not characters[0] then + characters[0] = dummyzero + end + -- if not characters[1] then + -- characters[1] = dummyzero -- test only + -- end +end + +registerotffeature { + name = "dummies", + description = "dummy symbols", + default = true, + manipulators = { + base = adddummysymbols, + node = adddummysymbols, + } +} + +registerafmfeature { + name = "dummies", + description = "dummy symbols", + default = true, + manipulators = { + base = adddummysymbols, + node = adddummysymbols, + } +} + +-- callback.register("char_exists",function(f,c) -- to slow anyway as called often so we should flag in tfmdata +-- return true +-- end) diff --git a/tex/context/base/font-chk.mkiv b/tex/context/base/font-chk.mkiv index d436388de..4572041c2 100644 --- a/tex/context/base/font-chk.mkiv +++ b/tex/context/base/font-chk.mkiv @@ -15,6 +15,15 @@ \registerctxluafile{font-chk}{1.001} +\tracinglostchars\zerocount + +% Use this instead: +% +% \definefontfeature[default][default][missing=yes] +% \enabletrackers[fonts.missing=replace] +% +% or better: + \unexpanded\def\checkcharactersinfont {\ctxcommand{checkcharactersinfont()}} \unexpanded\def\removemissingcharacters {\ctxcommand{removemissingcharacters()}} \unexpanded\def\replacemissingcharacters{\ctxcommand{replacemissingcharacters()}} diff --git a/tex/context/base/font-con.lua b/tex/context/base/font-con.lua index 09293895e..b43961ec6 100644 --- a/tex/context/base/font-con.lua +++ b/tex/context/base/font-con.lua @@ -290,14 +290,15 @@ constructors.nofsharedfonts = 0 local sharednames = { } function constructors.trytosharefont(target,tfmdata) - if constructors.sharefonts then + if constructors.sharefonts then -- not robust ! 
local characters = target.characters local n = 1 local t = { target.psname } local u = sortedkeys(characters) for i=1,#u do + local k = u[i] n = n + 1 ; t[n] = k - n = n + 1 ; t[n] = characters[u[i]].index or k + n = n + 1 ; t[n] = characters[k].index or k end local h = md5.HEX(concat(t," ")) local s = sharednames[h] @@ -452,8 +453,6 @@ function constructors.scale(tfmdata,specification) target.psname = psname target.name = name -- - -- inspect(properties) - -- properties.fontname = fontname properties.fullname = fullname properties.filename = filename @@ -826,7 +825,6 @@ function constructors.scale(tfmdata,specification) end targetcharacters[unicode] = chr end - -- constructors.aftercopyingcharacters(target,tfmdata) -- @@ -965,6 +963,7 @@ function constructors.finalize(tfmdata) -- properties.finalized = true -- + -- return tfmdata end diff --git a/tex/context/base/font-ctx.lua b/tex/context/base/font-ctx.lua index e251cc9c1..2bfcf3859 100644 --- a/tex/context/base/font-ctx.lua +++ b/tex/context/base/font-ctx.lua @@ -61,11 +61,12 @@ local nuts = nodes.nuts local tonut = nuts.tonut local getfield = nuts.getfield -local getattr = nuts.getattr -local getfont = nuts.getfont - local setfield = nuts.setfield +local getattr = nuts.getattr local setattr = nuts.setattr +local getprop = nuts.getprop +local setprop = nuts.setprop +local getfont = nuts.getfont local texgetattribute = tex.getattribute local texsetattribute = tex.setattribute @@ -137,8 +138,17 @@ function fonts.helpers.name(tfmdata) return file.basename(type(tfmdata) == "number" and properties[tfmdata].name or tfmdata.properties.name) end -utilities.strings.formatters.add(formatters,"font:name", [["'"..fontname(%s).."'"]], { fontname = fonts.helpers.name }) -utilities.strings.formatters.add(formatters,"font:features",[["'"..sequenced(%s," ",true).."'"]], { sequenced = table.sequenced }) +if _LUAVERSION < 5.2 then + + utilities.strings.formatters.add(formatters,"font:name", [["'"..fontname(%s).."'"]], "local 
fontname = fonts.helpers.name") + utilities.strings.formatters.add(formatters,"font:features",[["'"..sequenced(%s," ",true).."'"]],"local sequenced = table.sequenced") + +else + + utilities.strings.formatters.add(formatters,"font:name", [["'"..fontname(%s).."'"]], { fontname = fonts.helpers.name }) + utilities.strings.formatters.add(formatters,"font:features",[["'"..sequenced(%s," ",true).."'"]],{ sequenced = table.sequenced }) + +end -- ... like font-sfm or so @@ -155,47 +165,50 @@ local hashes = { } function constructors.trytosharefont(target,tfmdata) constructors.noffontsloaded = constructors.noffontsloaded + 1 if constructors.sharefonts then - local properties = target.properties - local fullname = target.fullname local fonthash = target.specification.hash - local sharedname = hashes[fonthash] - if sharedname then - -- this is ok for context as we know that only features can mess with font definitions - -- so a similar hash means that the fonts are similar too - if trace_defining then - report_defining("font %a uses backend resources of font %a (%s)",target.fullname,sharedname,"common hash") - end - target.fullname = sharedname - properties.sharedwith = sharedname - constructors.nofsharedfonts = constructors.nofsharedfonts + 1 - constructors.nofsharedhashes = constructors.nofsharedhashes + 1 - else - -- the one takes more time (in the worst case of many cjk fonts) but it also saves - -- embedding time - local characters = target.characters - local n = 1 - local t = { target.psname } - local u = sortedkeys(characters) - for i=1,#u do - n = n + 1 ; t[n] = k - n = n + 1 ; t[n] = characters[u[i]].index or k - end - local checksum = md5.HEX(concat(t," ")) - local sharedname = shares[checksum] + if fonthash then + local properties = target.properties local fullname = target.fullname + local sharedname = hashes[fonthash] if sharedname then + -- this is ok for context as we know that only features can mess with font definitions + -- so a similar hash means that the 
fonts are similar too if trace_defining then - report_defining("font %a uses backend resources of font %a (%s)",fullname,sharedname,"common vector") + report_defining("font %a uses backend resources of font %a (%s)",target.fullname,sharedname,"common hash") end - fullname = sharedname - properties.sharedwith= sharedname + target.fullname = sharedname + properties.sharedwith = sharedname constructors.nofsharedfonts = constructors.nofsharedfonts + 1 - constructors.nofsharedvectors = constructors.nofsharedvectors + 1 + constructors.nofsharedhashes = constructors.nofsharedhashes + 1 else - shares[checksum] = fullname + -- the one takes more time (in the worst case of many cjk fonts) but it also saves + -- embedding time + local characters = target.characters + local n = 1 + local t = { target.psname } + local u = sortedkeys(characters) + for i=1,#u do + local k = u[i] + n = n + 1 ; t[n] = k + n = n + 1 ; t[n] = characters[k].index or k + end + local checksum = md5.HEX(concat(t," ")) + local sharedname = shares[checksum] + local fullname = target.fullname + if sharedname then + if trace_defining then + report_defining("font %a uses backend resources of font %a (%s)",fullname,sharedname,"common vector") + end + fullname = sharedname + properties.sharedwith= sharedname + constructors.nofsharedfonts = constructors.nofsharedfonts + 1 + constructors.nofsharedvectors = constructors.nofsharedvectors + 1 + else + shares[checksum] = fullname + end + target.fullname = fullname + hashes[fonthash] = fullname end - target.fullname = fullname - hashes[fonthash] = fullname end end end @@ -493,7 +506,7 @@ local function definecontext(name,t) -- can be shared end local function presetcontext(name,parent,features) -- will go to con and shared - if features == "" and find(parent,"=") then + if features == "" and find(parent,"=",1,true) then features = parent parent = "" end @@ -810,7 +823,7 @@ local function splitcontext(features) -- presetcontext creates dummy here local sf = 
setups[features] if not sf then local n -- number - if find(features,",") then + if find(features,",",1,true) then -- let's assume a combination which is not yet defined but just specified (as in math) n, sf = presetcontext(features,features,"") else @@ -827,13 +840,13 @@ end -- local setup = setups[features] -- if setup then -- return setup --- elseif find(features,",") then +-- elseif find(features,",",1,true) then -- -- This is not that efficient but handy anyway for quick and dirty tests -- -- beware, due to the way of caching setups you can get the wrong results -- -- when components change. A safeguard is to nil the cache. -- local merge = nil -- for feature in gmatch(features,"[^, ]+") do --- if find(feature,"=") then +-- if find(feature,"=",1,true) then -- local k, v = lpegmatch(splitter,feature) -- if k and v then -- if not merge then @@ -941,319 +954,327 @@ local getspecification = definers.getspecification -- we can make helper macros which saves parsing (but normally not -- that many calls, e.g. 
in mk a couple of 100 and in metafun 3500) -local setdefaultfontname = context.fntsetdefname -local setsomefontname = context.fntsetsomename -local setemptyfontsize = context.fntsetnopsize -local setsomefontsize = context.fntsetsomesize -local letvaluerelax = context.letvaluerelax - -function commands.definefont_one(str) - statistics.starttiming(fonts) - if trace_defining then - report_defining("memory usage before: %s",statistics.memused()) - report_defining("start stage one: %s",str) - end - local fullname, size = lpegmatch(splitpattern,str) - local lookup, name, sub, method, detail = getspecification(fullname) - if not name then - report_defining("strange definition %a",str) - setdefaultfontname() - elseif name == "unknown" then - setdefaultfontname() - else - setsomefontname(name) - end - -- we can also use a count for the size - if size and size ~= "" then - local mode, size = lpegmatch(sizepattern,size) - if size and mode then - texsetcount("scaledfontmode",mode) - setsomefontsize(size) +do -- else too many locals + + local ctx_setdefaultfontname = context.fntsetdefname + local ctx_setsomefontname = context.fntsetsomename + local ctx_setemptyfontsize = context.fntsetnopsize + local ctx_setsomefontsize = context.fntsetsomesize + local ctx_letvaluerelax = context.letvaluerelax + + function commands.definefont_one(str) + statistics.starttiming(fonts) + if trace_defining then + report_defining("memory usage before: %s",statistics.memused()) + report_defining("start stage one: %s",str) + end + local fullname, size = lpegmatch(splitpattern,str) + local lookup, name, sub, method, detail = getspecification(fullname) + if not name then + report_defining("strange definition %a",str) + ctx_setdefaultfontname() + elseif name == "unknown" then + ctx_setdefaultfontname() + else + ctx_setsomefontname(name) + end + -- we can also use a count for the size + if size and size ~= "" then + local mode, size = lpegmatch(sizepattern,size) + if size and mode then + 
texsetcount("scaledfontmode",mode) + ctx_setsomefontsize(size) + else + texsetcount("scaledfontmode",0) + ctx_setemptyfontsize() + end + elseif true then + -- so we don't need to check in tex + texsetcount("scaledfontmode",2) + ctx_setemptyfontsize() else texsetcount("scaledfontmode",0) - setemptyfontsize() + ctx_setemptyfontsize() + end + specification = definers.makespecification(str,lookup,name,sub,method,detail,size) + if trace_defining then + report_defining("stop stage one") end - elseif true then - -- so we don't need to check in tex - texsetcount("scaledfontmode",2) - setemptyfontsize() - else - texsetcount("scaledfontmode",0) - setemptyfontsize() - end - specification = definers.makespecification(str,lookup,name,sub,method,detail,size) - if trace_defining then - report_defining("stop stage one") end -end -local n = 0 - --- we can also move rscale to here (more consistent) --- the argument list will become a table + local n = 0 -local function nice_cs(cs) - return (gsub(cs,".->", "")) -end + -- we can also move rscale to here (more consistent) + -- the argument list will become a table -function commands.definefont_two(global,cs,str,size,inheritancemode,classfeatures,fontfeatures,classfallbacks,fontfallbacks, - mathsize,textsize,relativeid,classgoodies,goodies,classdesignsize,fontdesignsize,scaledfontmode) - if trace_defining then - report_defining("start stage two: %s (size %s)",str,size) - end - -- name is now resolved and size is scaled cf sa/mo - local lookup, name, sub, method, detail = getspecification(str or "") - -- new (todo: inheritancemode) - local designsize = fontdesignsize ~= "" and fontdesignsize or classdesignsize or "" - local designname = designsizefilename(name,designsize,size) - if designname and designname ~= "" then - if trace_defining or trace_designsize then - report_defining("remapping name %a, specification %a, size %a, designsize %a",name,designsize,size,designname) - end - -- we don't catch detail here - local o_lookup, o_name, 
o_sub, o_method, o_detail = getspecification(designname) - if o_lookup and o_lookup ~= "" then lookup = o_lookup end - if o_method and o_method ~= "" then method = o_method end - if o_detail and o_detail ~= "" then detail = o_detail end - name = o_name - sub = o_sub - end - -- so far - -- some settings can have been overloaded - if lookup and lookup ~= "" then - specification.lookup = lookup - end - if relativeid and relativeid ~= "" then -- experimental hook - local id = tonumber(relativeid) or 0 - specification.relativeid = id > 0 and id + local function nice_cs(cs) + return (gsub(cs,".->", "")) end - -- - specification.name = name - specification.size = size - specification.sub = (sub and sub ~= "" and sub) or specification.sub - specification.mathsize = mathsize - specification.textsize = textsize - specification.goodies = goodies - specification.cs = cs - specification.global = global - specification.scalemode = scaledfontmode -- context specific - if detail and detail ~= "" then - specification.method = method or "*" - specification.detail = detail - elseif specification.detail and specification.detail ~= "" then - -- already set - elseif inheritancemode == 0 then - -- nothing - elseif inheritancemode == 1 then - -- fontonly - if fontfeatures and fontfeatures ~= "" then - specification.method = "*" - specification.detail = fontfeatures - end - if fontfallbacks and fontfallbacks ~= "" then - specification.fallbacks = fontfallbacks - end - elseif inheritancemode == 2 then - -- classonly - if classfeatures and classfeatures ~= "" then - specification.method = "*" - specification.detail = classfeatures - end - if classfallbacks and classfallbacks ~= "" then - specification.fallbacks = classfallbacks - end - elseif inheritancemode == 3 then - -- fontfirst - if fontfeatures and fontfeatures ~= "" then - specification.method = "*" - specification.detail = fontfeatures - elseif classfeatures and classfeatures ~= "" then - specification.method = "*" - 
specification.detail = classfeatures - end - if fontfallbacks and fontfallbacks ~= "" then - specification.fallbacks = fontfallbacks - elseif classfallbacks and classfallbacks ~= "" then - specification.fallbacks = classfallbacks - end - elseif inheritancemode == 4 then - -- classfirst - if classfeatures and classfeatures ~= "" then - specification.method = "*" - specification.detail = classfeatures - elseif fontfeatures and fontfeatures ~= "" then - specification.method = "*" - specification.detail = fontfeatures - end - if classfallbacks and classfallbacks ~= "" then - specification.fallbacks = classfallbacks - elseif fontfallbacks and fontfallbacks ~= "" then - specification.fallbacks = fontfallbacks - end - end - local tfmdata = definers.read(specification,size) -- id not yet known (size in spec?) - -- - local lastfontid = 0 - if not tfmdata then - report_defining("unable to define %a as %a",name,nice_cs(cs)) - lastfontid = -1 - letvaluerelax(cs) -- otherwise the current definition takes the previous one - elseif type(tfmdata) == "number" then + + function commands.definefont_two(global,cs,str,size,inheritancemode,classfeatures,fontfeatures,classfallbacks,fontfallbacks, + mathsize,textsize,relativeid,classgoodies,goodies,classdesignsize,fontdesignsize,scaledfontmode) if trace_defining then - report_defining("reusing %s, id %a, target %a, features %a / %a, fallbacks %a / %a, goodies %a / %a, designsize %a / %a", - name,tfmdata,nice_cs(cs),classfeatures,fontfeatures,classfallbacks,fontfallbacks,classgoodies,goodies,classdesignsize,fontdesignsize) + report_defining("start stage two: %s (size %s)",str,size) end - csnames[tfmdata] = specification.cs - texdefinefont(global,cs,tfmdata) - -- resolved (when designsize is used): - local size = fontdata[tfmdata].parameters.size or 0 - setsomefontsize(size .. 
"sp") - texsetcount("scaledfontsize",size) - lastfontid = tfmdata - else - -- setting the extra characters will move elsewhere - local characters = tfmdata.characters - local parameters = tfmdata.parameters - -- we use char0 as signal; cf the spec pdf can handle this (no char in slot) - characters[0] = nil - -- characters[0x00A0] = { width = parameters.space } - -- characters[0x2007] = { width = characters[0x0030] and characters[0x0030].width or parameters.space } -- figure - -- characters[0x2008] = { width = characters[0x002E] and characters[0x002E].width or parameters.space } -- period - -- - constructors.checkvirtualids(tfmdata) -- experiment, will become obsolete when slots can selfreference - local id = font.define(tfmdata) - csnames[id] = specification.cs - tfmdata.properties.id = id - definers.register(tfmdata,id) -- to be sure, normally already done - texdefinefont(global,cs,id) - constructors.cleanuptable(tfmdata) - constructors.finalize(tfmdata) - if trace_defining then - report_defining("defining %a, id %a, target %a, features %a / %a, fallbacks %a / %a", - name,id,nice_cs(cs),classfeatures,fontfeatures,classfallbacks,fontfallbacks) + -- name is now resolved and size is scaled cf sa/mo + local lookup, name, sub, method, detail = getspecification(str or "") + -- new (todo: inheritancemode) + local designsize = fontdesignsize ~= "" and fontdesignsize or classdesignsize or "" + local designname = designsizefilename(name,designsize,size) + if designname and designname ~= "" then + if trace_defining or trace_designsize then + report_defining("remapping name %a, specification %a, size %a, designsize %a",name,designsize,size,designname) + end + -- we don't catch detail here + local o_lookup, o_name, o_sub, o_method, o_detail = getspecification(designname) + if o_lookup and o_lookup ~= "" then lookup = o_lookup end + if o_method and o_method ~= "" then method = o_method end + if o_detail and o_detail ~= "" then detail = o_detail end + name = o_name + sub = o_sub 
end - -- resolved (when designsize is used): - local size = tfmdata.parameters.size or 655360 - setsomefontsize(size .. "sp") - texsetcount("scaledfontsize",size) - lastfontid = id - end - if trace_defining then - report_defining("memory usage after: %s",statistics.memused()) - report_defining("stop stage two") - end - -- - texsetcount("global","lastfontid",lastfontid) - if not mathsize then - -- forget about it - elseif mathsize == 0 then - lastmathids[1] = lastfontid - else - lastmathids[mathsize] = lastfontid - end - -- - statistics.stoptiming(fonts) -end - -function definers.define(specification) - -- - local name = specification.name - if not name or name == "" then - return -1 - else - statistics.starttiming(fonts) - -- - -- following calls expect a few properties to be set: - -- - local lookup, name, sub, method, detail = getspecification(name or "") - -- - specification.name = (name ~= "" and name) or specification.name - -- - specification.lookup = specification.lookup or (lookup ~= "" and lookup) or "file" - specification.size = specification.size or 655260 - specification.sub = specification.sub or (sub ~= "" and sub) or "" - specification.method = specification.method or (method ~= "" and method) or "*" - specification.detail = specification.detail or (detail ~= "" and detail) or "" - -- - if type(specification.size) == "string" then - specification.size = texsp(specification.size) or 655260 + -- so far + -- some settings can have been overloaded + if lookup and lookup ~= "" then + specification.lookup = lookup + end + if relativeid and relativeid ~= "" then -- experimental hook + local id = tonumber(relativeid) or 0 + specification.relativeid = id > 0 and id end -- - specification.specification = "" -- not used - specification.resolved = "" - specification.forced = "" - specification.features = { } -- via detail, maybe some day - -- - -- we don't care about mathsize textsize goodies fallbacks - -- - local cs = specification.cs - if cs == "" then - cs = 
nil - specification.cs = nil - specification.global = false - elseif specification.global == nil then - specification.global = false + specification.name = name + specification.size = size + specification.sub = (sub and sub ~= "" and sub) or specification.sub + specification.mathsize = mathsize + specification.textsize = textsize + specification.goodies = goodies + specification.cs = cs + specification.global = global + specification.scalemode = scaledfontmode -- context specific + if detail and detail ~= "" then + specification.method = method or "*" + specification.detail = detail + elseif specification.detail and specification.detail ~= "" then + -- already set + elseif inheritancemode == 0 then + -- nothing + elseif inheritancemode == 1 then + -- fontonly + if fontfeatures and fontfeatures ~= "" then + specification.method = "*" + specification.detail = fontfeatures + end + if fontfallbacks and fontfallbacks ~= "" then + specification.fallbacks = fontfallbacks + end + elseif inheritancemode == 2 then + -- classonly + if classfeatures and classfeatures ~= "" then + specification.method = "*" + specification.detail = classfeatures + end + if classfallbacks and classfallbacks ~= "" then + specification.fallbacks = classfallbacks + end + elseif inheritancemode == 3 then + -- fontfirst + if fontfeatures and fontfeatures ~= "" then + specification.method = "*" + specification.detail = fontfeatures + elseif classfeatures and classfeatures ~= "" then + specification.method = "*" + specification.detail = classfeatures + end + if fontfallbacks and fontfallbacks ~= "" then + specification.fallbacks = fontfallbacks + elseif classfallbacks and classfallbacks ~= "" then + specification.fallbacks = classfallbacks + end + elseif inheritancemode == 4 then + -- classfirst + if classfeatures and classfeatures ~= "" then + specification.method = "*" + specification.detail = classfeatures + elseif fontfeatures and fontfeatures ~= "" then + specification.method = "*" + 
specification.detail = fontfeatures + end + if classfallbacks and classfallbacks ~= "" then + specification.fallbacks = classfallbacks + elseif fontfallbacks and fontfallbacks ~= "" then + specification.fallbacks = fontfallbacks + end end + local tfmdata = definers.read(specification,size) -- id not yet known (size in spec?) -- - local tfmdata = definers.read(specification,specification.size) + local lastfontid = 0 if not tfmdata then - return -1, nil + report_defining("unable to define %a as %a",name,nice_cs(cs)) + lastfontid = -1 + ctx_letvaluerelax(cs) -- otherwise the current definition takes the previous one elseif type(tfmdata) == "number" then - if cs then - texdefinefont(specification.global,cs,tfmdata) - csnames[tfmdata] = cs + if trace_defining then + report_defining("reusing %s, id %a, target %a, features %a / %a, fallbacks %a / %a, goodies %a / %a, designsize %a / %a", + name,tfmdata,nice_cs(cs),classfeatures,fontfeatures,classfallbacks,fontfallbacks,classgoodies,goodies,classdesignsize,fontdesignsize) end - return tfmdata, fontdata[tfmdata] + csnames[tfmdata] = specification.cs + texdefinefont(global,cs,tfmdata) + -- resolved (when designsize is used): + local size = fontdata[tfmdata].parameters.size or 0 + ctx_setsomefontsize(size .. 
"sp") + texsetcount("scaledfontsize",size) + lastfontid = tfmdata else + -- setting the extra characters will move elsewhere + local characters = tfmdata.characters + local parameters = tfmdata.parameters + -- we use char0 as signal; cf the spec pdf can handle this (no char in slot) + characters[0] = nil + -- characters[0x00A0] = { width = parameters.space } + -- characters[0x2007] = { width = characters[0x0030] and characters[0x0030].width or parameters.space } -- figure + -- characters[0x2008] = { width = characters[0x002E] and characters[0x002E].width or parameters.space } -- period + -- constructors.checkvirtualids(tfmdata) -- experiment, will become obsolete when slots can selfreference local id = font.define(tfmdata) + csnames[id] = specification.cs tfmdata.properties.id = id - definers.register(tfmdata,id) - if cs then - texdefinefont(specification.global,cs,id) - csnames[id] = cs - end + definers.register(tfmdata,id) -- to be sure, normally already done + texdefinefont(global,cs,id) constructors.cleanuptable(tfmdata) constructors.finalize(tfmdata) - return id, tfmdata + if trace_defining then + report_defining("defining %a, id %a, target %a, features %a / %a, fallbacks %a / %a", + name,id,nice_cs(cs),classfeatures,fontfeatures,classfallbacks,fontfallbacks) + end + -- resolved (when designsize is used): + local size = tfmdata.parameters.size or 655360 + ctx_setsomefontsize(size .. 
"sp") + texsetcount("scaledfontsize",size) + lastfontid = id + end + if trace_defining then + report_defining("memory usage after: %s",statistics.memused()) + report_defining("stop stage two") end + -- + texsetcount("global","lastfontid",lastfontid) + if not mathsize then + -- forget about it + elseif mathsize == 0 then + lastmathids[1] = lastfontid + else + lastmathids[mathsize] = lastfontid + end + -- statistics.stoptiming(fonts) end + + function definers.define(specification) + -- + local name = specification.name + if not name or name == "" then + return -1 + else + statistics.starttiming(fonts) + -- + -- following calls expect a few properties to be set: + -- + local lookup, name, sub, method, detail = getspecification(name or "") + -- + specification.name = (name ~= "" and name) or specification.name + -- + specification.lookup = specification.lookup or (lookup ~= "" and lookup) or "file" + specification.size = specification.size or 655260 + specification.sub = specification.sub or (sub ~= "" and sub) or "" + specification.method = specification.method or (method ~= "" and method) or "*" + specification.detail = specification.detail or (detail ~= "" and detail) or "" + -- + if type(specification.size) == "string" then + specification.size = texsp(specification.size) or 655260 + end + -- + specification.specification = "" -- not used + specification.resolved = "" + specification.forced = "" + specification.features = { } -- via detail, maybe some day + -- + -- we don't care about mathsize textsize goodies fallbacks + -- + local cs = specification.cs + if cs == "" then + cs = nil + specification.cs = nil + specification.global = false + elseif specification.global == nil then + specification.global = false + end + -- + local tfmdata = definers.read(specification,specification.size) + if not tfmdata then + return -1, nil + elseif type(tfmdata) == "number" then + if cs then + texdefinefont(specification.global,cs,tfmdata) + csnames[tfmdata] = cs + end + return 
tfmdata, fontdata[tfmdata] + else + constructors.checkvirtualids(tfmdata) -- experiment, will become obsolete when slots can selfreference + local id = font.define(tfmdata) + tfmdata.properties.id = id + definers.register(tfmdata,id) + if cs then + texdefinefont(specification.global,cs,id) + csnames[id] = cs + end + constructors.cleanuptable(tfmdata) + constructors.finalize(tfmdata) + return id, tfmdata + end + statistics.stoptiming(fonts) + end + end + end -- local id, cs = fonts.definers.internal { } -- local id, cs = fonts.definers.internal { number = 2 } -- local id, cs = fonts.definers.internal { name = "dejavusans" } -local n = 0 - -function definers.internal(specification,cs) - specification = specification or { } - local name = specification.name - local size = specification.size and number.todimen(specification.size) or texgetdimen("bodyfontsize") - local number = tonumber(specification.number) - local id = nil - if number then - id = number - elseif name and name ~= "" then - local cs = cs or specification.cs - if not cs then - n = n + 1 -- beware ... there can be many and they are often used once - -- cs = formatters["internal font %s"](n) - cs = "internal font " .. n - else - specification.cs = cs +do + + local n = 0 + + function definers.internal(specification,cs) + specification = specification or { } + local name = specification.name + local size = specification.size and number.todimen(specification.size) or texgetdimen("bodyfontsize") + local number = tonumber(specification.number) + local id = nil + if number then + id = number + elseif name and name ~= "" then + local cs = cs or specification.cs + if not cs then + n = n + 1 -- beware ... there can be many and they are often used once + -- cs = formatters["internal font %s"](n) + cs = "internal font " .. 
n + else + specification.cs = cs + end + id = definers.define { + name = name, + size = size, + cs = cs, + } end - id = definers.define { - name = name, - size = size, - cs = cs, - } - end - if not id then - id = currentfont() + if not id then + id = currentfont() + end + return id, csnames[id] end - return id, csnames[id] + end local enable_auto_r_scale = false @@ -1519,6 +1540,11 @@ local Shapes = { mono = "Mono", } +local ctx_startfontclass = context.startfontclass +local ctx_stopfontclass = context.stopfontclass +local ctx_definefontsynonym = context.definefontsynonym +local ctx_dofastdefinetypeface = context.dofastdefinetypeface + function fonts.definetypeface(name,t) if type(name) == "table" then -- {name=abc,k=v,...} @@ -1546,14 +1572,14 @@ function fonts.definetypeface(name,t) local normalwidth = t.normalwidth or t.width or p.normalwidth or p.width or "normal" local boldwidth = t.boldwidth or t.width or p.boldwidth or p.width or "normal" Shape = Shapes[shape] or "Serif" - context.startfontclass { name } - context.definefontsynonym( { format("%s", Shape) }, { format("spec:%s-%s-regular-%s", fontname, normalweight, normalwidth) } ) - context.definefontsynonym( { format("%sBold", Shape) }, { format("spec:%s-%s-regular-%s", fontname, boldweight, boldwidth ) } ) - context.definefontsynonym( { format("%sBoldItalic", Shape) }, { format("spec:%s-%s-italic-%s", fontname, boldweight, boldwidth ) } ) - context.definefontsynonym( { format("%sItalic", Shape) }, { format("spec:%s-%s-italic-%s", fontname, normalweight, normalwidth) } ) - context.stopfontclass() + ctx_startfontclass { name } + ctx_definefontsynonym( { format("%s", Shape) }, { format("spec:%s-%s-regular-%s", fontname, normalweight, normalwidth) } ) + ctx_definefontsynonym( { format("%sBold", Shape) }, { format("spec:%s-%s-regular-%s", fontname, boldweight, boldwidth ) } ) + ctx_definefontsynonym( { format("%sBoldItalic", Shape) }, { format("spec:%s-%s-italic-%s", fontname, boldweight, boldwidth ) } ) + 
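The definer functions above are wrapped in `do ... end` blocks with the comment `else too many locals`: Lua allows at most 200 local variables per function (the main chunk included), so grouping related functions in a block keeps their shared state, like the private counter `n` in `definers.internal`, from counting against the enclosing chunk. A toy version of the pattern:

```lua
-- A do ... end block scopes shared upvalues to just the functions that
-- need them; the counter below is invisible outside the block, mirroring
-- the private n used by definers.internal.
local nextinternalname   -- forward-declared, assigned inside the block

do
    local n = 0          -- counts against this block, not the whole chunk
    nextinternalname = function()
        n = n + 1
        return "internal font " .. n
    end
end
```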
ctx_definefontsynonym( { format("%sItalic", Shape) }, { format("spec:%s-%s-italic-%s", fontname, normalweight, normalwidth) } ) + ctx_stopfontclass() local settings = sequenced({ features= t.features },",") - context.dofastdefinetypeface(name, shortcut, shape, size, settings) + ctx_dofastdefinetypeface(name, shortcut, shape, size, settings) end function fonts.current() -- todo: also handle name @@ -1566,10 +1592,15 @@ end -- interfaces +local context_char = context.char +local context_getvalue = context.getvalue + +local commands_doifelse = commands.doifelse + function commands.fontchar(n) n = nametoslot(n) if n then - context.char(n) + context_char(n) end end @@ -1579,7 +1610,7 @@ function commands.doifelsecurrentfonthasfeature(name) -- can be made faster with f = f and f.rawdata f = f and f.resources f = f and f.features - commands.doifelse(f and (f.gpos[name] or f.gsub[name])) + commands_doifelse(f and (f.gpos[name] or f.gsub[name])) end local p, f = 1, formatters["%0.1fpt"] -- normally this value is changed only once @@ -1751,14 +1782,15 @@ end -- redefinition -local quads = hashes.quads -local xheights = hashes.xheights +-- local hashes = fonts.hashes +-- local emwidths = hashes.emwidths +-- local exheights = hashes.exheights setmetatableindex(dimenfactors, function(t,k) if k == "ex" then - return 1/xheights[currentfont()] + return 1/exheights[currentfont()] elseif k == "em" then - return 1/quads[currentfont()] + return 1/emwidths[currentfont()] elseif k == "pct" or k == "%" then return 1/(texget("hsize")/100) else @@ -1835,7 +1867,7 @@ end -- end function commands.setfontofid(id) - context.getvalue(csnames[id]) + context_getvalue(csnames[id]) end -- more interfacing: @@ -1911,7 +1943,6 @@ end -- a fontkern plug: - local copy_node = nuts.copy local kern = nuts.pool.register(nuts.pool.kern()) @@ -1967,7 +1998,7 @@ local function markstates(head) head = tonut(head) local model = getattr(head,a_colormodel) or 1 for glyph in traverse_by_id(glyph_code,head) do - 
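The `dimenfactors` hunk above replaces the `quads`/`xheights` hashes with `emwidths`/`exheights`. The factors stay correct across font switches because they are computed on each lookup through an `__index` metamethod (ConTeXt's `setmetatableindex`) rather than stored. A reduced model, with invented sp values:

```lua
-- Reduced model of the lazy factor table: unknown keys are computed per
-- lookup via __index so they track the current font. The sp values for
-- font 1 are invented for this example.
local emwidths    = { [1] = 655360 }        -- font id -> em width in sp
local exheights   = { [1] = 282168 }        -- font id -> ex height in sp
local currentfont = function() return 1 end

local dimenfactors = setmetatable({ pt = 1/65536 }, {
    __index = function(t, k)
        if k == "em" then
            return 1 / emwidths[currentfont()]
        elseif k == "ex" then
            return 1 / exheights[currentfont()]
        end
    end
})
```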
     local a = getattr(glyph,a_state)
+    local a = getprop(glyph,a_state)
     if a then
         local name = names[a]
         if name then
@@ -2023,3 +2054,12 @@ function methods.nocolor(head,font,attr)
     end
     return head, true
 end
+
+function commands.purefontname(name)
+    if type(name) == "number" then
+        name = fonts.helpers.name(name)
+    end
+    if type(name) == "string" then
+        context(file.basename(name))
+    end
+end
diff --git a/tex/context/base/font-fea.mkvi b/tex/context/base/font-fea.mkvi
index 8d985b411..35a9a642a 100644
--- a/tex/context/base/font-fea.mkvi
+++ b/tex/context/base/font-fea.mkvi
@@ -143,11 +143,11 @@
 % hashing at this end is slower
 
-\unexpanded\def\addfeature        {\doifnextoptionalelse\font_feature_add_yes \font_feature_add_nop }
-\unexpanded\def\subtractfeature   {\doifnextoptionalelse\font_feature_subtract_yes \font_feature_subtract_nop }
-\unexpanded\def\replacefeature    {\doifnextoptionalelse\font_feature_replace_yes \font_feature_replace_nop }
-\unexpanded\def\resetandaddfeature{\doifnextoptionalelse\font_feature_reset_add_yes\font_feature_reset_add_nop}
-\unexpanded\def\feature           {\doifnextoptionalelse\font_feature_yes \font_feature_nop }
+\unexpanded\def\addfeature        {\doifnextoptionalcselse\font_feature_add_yes \font_feature_add_nop }
+\unexpanded\def\subtractfeature   {\doifnextoptionalcselse\font_feature_subtract_yes \font_feature_subtract_nop }
+\unexpanded\def\replacefeature    {\doifnextoptionalcselse\font_feature_replace_yes \font_feature_replace_nop }
+\unexpanded\def\resetandaddfeature{\doifnextoptionalcselse\font_feature_reset_add_yes\font_feature_reset_add_nop}
+\unexpanded\def\feature           {\doifnextoptionalcselse\font_feature_yes \font_feature_nop }
 
 \unexpanded\def\font_feature_add_yes [#feature]{\edef\m_font_feature_asked{#feature}\font_feature_add}
 \unexpanded\def\font_feature_add_nop  #feature{\edef\m_font_feature_asked{#feature}\font_feature_add}
diff --git a/tex/context/base/font-fil.mkvi b/tex/context/base/font-fil.mkvi
index dcb298619..158bcda71 100644
--- a/tex/context/base/font-fil.mkvi
+++ b/tex/context/base/font-fil.mkvi
@@ -89,11 +89,11 @@
 \def\font_basics_define_font_synonym_nop
   {\expandafter\let\csname\??fontfile\m_font_name\endcsname\m_font_file
-   \doifnextoptionalelse\font_basics_define_font_synonym_nop_opt\font_basics_define_font_synonym_nop_nil}
+   \doifnextoptionalcselse\font_basics_define_font_synonym_nop_opt\font_basics_define_font_synonym_nop_nil}
 
 \def\font_basics_define_font_synonym_yes
   {\expandafter\let\csname\??fontfile\fontclass\m_font_name\endcsname\m_font_file
-   \doifnextoptionalelse\font_basics_define_font_synonym_yes_opt\font_basics_define_font_synonym_yes_nil}
+   \doifnextoptionalcselse\font_basics_define_font_synonym_yes_opt\font_basics_define_font_synonym_yes_nil}
 
 \def\font_basics_define_font_synonym_nop_opt[#specification]%
   {\let\p_features \undefined
diff --git a/tex/context/base/font-gds.lua b/tex/context/base/font-gds.lua
index e57f784a0..9e7cb841e 100644
--- a/tex/context/base/font-gds.lua
+++ b/tex/context/base/font-gds.lua
@@ -278,7 +278,7 @@ local function setcolorscheme(tfmdata,scheme)
             end
         elseif type(name) == "number" then
             reverse[name] = i
-        elseif find(name,":") then
+        elseif find(name,":",1,true) then
             local start, stop = splitup(name,":")
             start = tonumber(start)
             stop = tonumber(stop)
diff --git a/tex/context/base/font-ini.mkvi b/tex/context/base/font-ini.mkvi
index 521901e05..c1e6d9390 100644
--- a/tex/context/base/font-ini.mkvi
+++ b/tex/context/base/font-ini.mkvi
@@ -363,7 +363,7 @@
 \let\thedefinedfont\relax % not to be confused with \everydefinefont
 
 \unexpanded\def\definedfont
-  {\doifnextoptionalelse\font_basics_defined_font_yes\font_basics_defined_font_nop}
+  {\doifnextoptionalcselse\font_basics_defined_font_yes\font_basics_defined_font_nop}
 
 \def\font_basics_defined_font_yes[#specification]%
   {\c_font_feature_inheritance_mode\c_font_feature_inheritance_fontonly
@@ -2082,7 +2082,7 @@
 % \newtoks \everyswitchtobodyfont
 
 \unexpanded\def\setupbodyfont
-  {\doifnextoptionalelse\font_basics_setupbodyfont_yes\font_basics_setupbodyfont_nop}
+  {\doifnextoptionalcselse\font_basics_setupbodyfont_yes\font_basics_setupbodyfont_nop}
 
 \def\font_basics_setupbodyfont_nop
   {\restoreglobalbodyfont
@@ -2175,7 +2175,8 @@
 %D The next auxilliary macro is an alternative to \type
 %D {\fontname}.
 
-\def\purefontname#font{\ctxlua{file.basename("\fontname#font"}} % will be function using id
+\def\purefontname#font{\ctxcommand{purefontname("\fontname#font")}}
+%def\purefontname#font{\ctxcommand{purefontname(\number\fontid#font)}}
 
 %D \macros
 %D   {switchstyleonly}
@@ -2190,7 +2191,7 @@
 %D \stoptyping
 
 \unexpanded\def\switchstyleonly
-  {\doifnextoptionalelse\font_basics_switch_style_only_opt\font_basics_switch_style_only_arg}
+  {\doifnextoptionalcselse\font_basics_switch_style_only_opt\font_basics_switch_style_only_arg}
 
 \def\font_basics_switch_style_only_arg#name% stupid version
   {\font_helpers_set_current_font_style{\csname\??fontshortstyle\checkedstrippedcsname#name\endcsname}%
@@ -2348,6 +2349,83 @@
 \def\saveddefinedfontid  {\number\fontid\font}
 \def\saveddefinedfontname{\fontname\font}
 
+% yes or no:
+
+% \let\font_basics_check_text_bodyfont_slow\font_basics_check_text_bodyfont
+%
+% \unexpanded\def\font_basics_check_text_bodyfont
+%   {\ifproductionrun
+%      % not per se \s!..'s
+%      \glet\font_basics_check_text_bodyfont     \font_basics_check_text_bodyfont_slow
+%      \glet\font_basics_check_text_bodyfont_fast\relax
+%      \expandafter\font_basics_check_text_bodyfont
+%    \else
+%      \expandafter\font_basics_check_text_bodyfont_fast
+%    \fi}
+%
+% \def\font_basics_check_text_bodyfont_fast#style#alternative#size% size can be empty (checking needed as \bf is already defined)
+%   {\setugvalue{#style#size}% \rma
+%      {\let\fontstyle#style%
+%       \let\fontsize #size%
+%       \font_helpers_check_big_math_synchronization % double? better in everymath?
+%       \font_helpers_synchronize_font}%
+%    \setugvalue{#alternative#size}% \sla
+%      {\let\fontalternative#alternative%
+%       \let\fontsize #size%
+%       \font_helpers_check_big_math_synchronization % double? better in everymath?
+%       \font_helpers_synchronize_font}%
+%    \setugvalue{#style#alternative#size}% \rmsla
+%      {\let\fontstyle #style%
+%       \let\fontalternative#alternative%
+%       \let\fontsize #size%
+%       \font_helpers_check_big_math_synchronization % double? better in everymath?
+%       \font_helpers_synchronize_font}%
+%    \ifcsname\s!normal#style\endcsname % text/math check
+%      \expandafter\let\csname#style\expandafter\endcsname\csname\s!normal#style\endcsname
+%    \else
+%      \setugvalue{#style}% \rm
+%        {\let\fontstyle#style%
+%         \font_typescripts_inherit_check\fontstyle
+%         \ifmmode\mr\fi % otherwise \rm not downward compatible ... not adapted yet
+%         \font_helpers_synchronize_font}%
+%    \fi
+%    \ifcsname\s!normal#alternative\endcsname % text/math check
+%      \expandafter\let\csname#alternative\expandafter\endcsname\csname\s!normal#alternative\endcsname
+%    \else
+%      \setugvalue{#alternative}% \sl
+%        {\let\fontalternative#alternative%
+%         \font_helpers_synchronize_font}%
+%    \fi
+%    \setugvalue{#style\s!x}% \rmx
+%      {\csname#style\endcsname\tx}%
+%    \setugvalue{#style\s!xx}% \rmxx
+%      {\csname#style\endcsname\txx}%
+%    \setugvalue{#alternative\s!x}% \slx
+%      {\font_helpers_check_nested_x_fontsize
+%       \ifmmode
+%         \scriptstyle
+%       \else
+%         \let\fontface\!!plusfour
+%         \let\fontalternative#alternative%
+%         \font_helpers_synchronize_font
+%       \fi
+%       \currentxfontsize\plusone
+%       \let\tx\txx}%
+%    \setugvalue{#alternative\s!xx}% \slxx
+%      {\font_helpers_check_nested_x_fontsize
+%       \ifmmode
+%         \scriptscriptstyle
+%       \else
+%         \let\fontface\!!plusfive
+%         \let\fontalternative#alternative%
+%         \font_helpers_synchronize_font
+%       \fi
+%       \currentxfontsize\plustwo
+%       \let\tx\empty
+%       \let\txx\empty}%
+%    \setugvalue{#style#alternative}% \rmsl
+%      {\let\fontstyle #style%
+%       \let\fontalternative#alternative%
+%       \font_helpers_synchronize_font}}
 
 \protect \endinput
diff --git a/tex/context/base/font-lib.mkvi b/tex/context/base/font-lib.mkvi
index a664d9b3a..bfd85245c 100644
--- a/tex/context/base/font-lib.mkvi
+++ b/tex/context/base/font-lib.mkvi
@@ -38,7 +38,11 @@
 \registerctxluafile{font-ott}{1.001} % otf tables (first)
 \registerctxluafile{font-otf}{1.001} % otf main
 \registerctxluafile{font-otb}{1.001} % otf main base
-\registerctxluafile{node-inj}{1.001} % we might split it off
+
+\doiffileelse{font-inj.lua}
+  {\registerctxluafile{font-inj}{1.001}} % new method (for the moment only local)
+  {\registerctxluafile{node-inj}{1.001}} % old method
+
 %registerctxluafile{font-ota}{1.001} % otf analyzers
 \registerctxluafile{font-otx}{1.001} % otf analyzers
 \registerctxluafile{font-otn}{1.001} % otf main node
diff --git a/tex/context/base/font-mis.lua b/tex/context/base/font-mis.lua
index 63cae37f3..7385d6f31 100644
--- a/tex/context/base/font-mis.lua
+++ b/tex/context/base/font-mis.lua
@@ -22,7 +22,7 @@
 local handlers = fonts.handlers
 handlers.otf = handlers.otf or { }
 local otf = handlers.otf
 
-otf.version = otf.version or 2.751
+otf.version = otf.version or 2.755
 otf.cache = otf.cache or containers.define("fonts", "otf", otf.version, true)
 
 function otf.loadcached(filename,format,sub)
diff --git a/tex/context/base/font-odv.lua b/tex/context/base/font-odv.lua
index d07c38d9a..079027ffe 100644
--- a/tex/context/base/font-odv.lua
+++ b/tex/context/base/font-odv.lua
@@ -118,8 +118,8 @@
 local getfont = nuts.getfont
 local getsubtype = nuts.getsubtype
 local getfield = nuts.getfield
 local setfield = nuts.setfield
-local getattr = nuts.getattr
-local setattr = nuts.setattr
+local getprop = nuts.getprop
+local setprop = nuts.setprop
 
 local insert_node_after = nuts.insert_after
 local copy_node = nuts.copy
@@ -649,7 +649,7 @@ local function deva_reorder(head,start,stop,font,attr,nbspaces)
             current = start
         else
             current = getnext(n)
-            setattr(start,a_state,s_rphf)
+            setprop(start,a_state,s_rphf)
         end
     end
@@ -682,9 +682,9 @@ local function deva_reorder(head,start,stop,font,attr,nbspaces)
                 local nextcurrent = copy_node(current)
                 setfield(tempcurrent,"next",nextcurrent)
                 setfield(nextcurrent,"prev",tempcurrent)
-                setattr(tempcurrent,a_state,s_blwf)
+                setprop(tempcurrent,a_state,s_blwf)
                 tempcurrent = processcharacters(tempcurrent,font)
-                setattr(tempcurrent,a_state,unsetvalue)
+                setprop(tempcurrent,a_state,unsetvalue)
                 if getchar(next) == getchar(tempcurrent) then
                     flush_list(tempcurrent)
                     local n = copy_node(current)
@@ -713,7 +713,7 @@ local function deva_reorder(head,start,stop,font,attr,nbspaces)
     while not basefound do
         -- find base consonant
         if consonant[getchar(current)] then
-            setattr(current,a_state,s_half)
+            setprop(current,a_state,s_half)
             if not firstcons then
                 firstcons = current
             end
@@ -722,7 +722,7 @@ local function deva_reorder(head,start,stop,font,attr,nbspaces)
                 base = current
             elseif blwfcache[getchar(current)] then
                 -- consonant has below-base (or post-base) form
-                setattr(current,a_state,s_blwf)
+                setprop(current,a_state,s_blwf)
             else
                 base = current
             end
@@ -802,15 +802,15 @@ local function deva_reorder(head,start,stop,font,attr,nbspaces)
     while current ~= stop do
         local next = getnext(current)
         if next ~= stop and halant[getchar(next)] and getchar(getnext(next)) == c_zwnj then
-            setattr(current,a_state,unsetvalue)
+            setprop(current,a_state,unsetvalue)
         end
         current = next
     end
 
-    if base ~= stop and getattr(base,a_state) then
+    if base ~= stop and getprop(base,a_state) then
         local next = getnext(base)
         if halant[getchar(next)] and not (next ~= stop and getchar(getnext(next)) == c_zwj) then
-            setattr(base,a_state,unsetvalue)
+            setprop(base,a_state,unsetvalue)
         end
     end
@@ -906,7 +906,7 @@ local function deva_reorder(head,start,stop,font,attr,nbspaces)
             end
             bn = next
         end
-        if getattr(current,a_state) == s_rphf then
+        if getprop(current,a_state) == s_rphf then
             -- position Reph (Ra + H) after post-base 'matra' (if any) since these
             -- become marks on the 'matra', not on the base glyph
             if b ~= current then
@@ -998,12 +998,12 @@ end
 function handlers.devanagari_reorder_matras(head,start,kind,lookupname,replacement) -- no leak
     local current = start -- we could cache attributes here
     local startfont = getfont(start)
-    local startattr = getattr(start,a_syllabe)
+    local startattr = getprop(start,a_syllabe)
     -- can be fast loop
-    while current and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == font and getattr(current,a_syllabe) == startattr do
+    while current and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == font and getprop(current,a_syllabe) == startattr do
         local next = getnext(current)
-        if halant[getchar(current)] and not getattr(current,a_state) then
-            if next and getid(next) == glyph_code and getsubtype(next) < 256 and getfont(next) == font and getattr(next,a_syllabe) == startattr and zw_char[getchar(next)] then
+        if halant[getchar(current)] and not getprop(current,a_state) then
+            if next and getid(next) == glyph_code and getsubtype(next) < 256 and getfont(next) == font and getprop(next,a_syllabe) == startattr and zw_char[getchar(next)] then
                 current = next
             end
             local startnext = getnext(start)
@@ -1054,11 +1054,11 @@ function handlers.devanagari_reorder_reph(head,start,kind,lookupname,replacement
     local startnext = nil
     local startprev = nil
     local startfont = getfont(start)
-    local startattr = getattr(start,a_syllabe)
-    while current and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == startfont and getattr(current,a_syllabe) == startattr do --step 2
-        if halant[getchar(current)] and not getattr(current,a_state) then
+    local startattr = getprop(start,a_syllabe)
+    while current and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == startfont and getprop(current,a_syllabe) == startattr do --step 2
+        if halant[getchar(current)] and not getprop(current,a_state) then
             local next = getnext(current)
-            if next and getid(next) == glyph_code and getsubtype(next) < 256 and getfont(next) == startfont and getattr(next,a_syllabe) == startattr and zw_char[getchar(next)] then
+            if next and getid(next) == glyph_code and getsubtype(next) < 256 and getfont(next) == startfont and getprop(next,a_syllabe) == startattr and zw_char[getchar(next)] then
                 current = next
             end
             startnext = getnext(start)
@@ -1071,15 +1071,15 @@ function handlers.devanagari_reorder_reph(head,start,kind,lookupname,replacement
             setfield(current,"next",start)
             setfield(start,"prev",current)
             start = startnext
-            startattr = getattr(start,a_syllabe)
+            startattr = getprop(start,a_syllabe)
             break
         end
         current = getnext(current)
     end
     if not startnext then
         current = getnext(start)
-        while current and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == startfont and getattr(current,a_syllabe) == startattr do --step 4
-            if getattr(current,a_state) == s_pstf then --post-base
+        while current and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == startfont and getprop(current,a_syllabe) == startattr do --step 4
+            if getprop(current,a_state) == s_pstf then --post-base
                 startnext = getnext(start)
                 head = remove_node(head,start)
                 local prev = getprev(current)
@@ -1088,7 +1088,7 @@ function handlers.devanagari_reorder_reph(head,start,kind,lookupname,replacement
                 setfield(start,"next",current)
                 setfield(current,"prev",start)
                 start = startnext
-                startattr = getattr(start,a_syllabe)
+                startattr = getprop(start,a_syllabe)
                 break
             end
             current = getnext(current)
@@ -1100,7 +1100,7 @@ function handlers.devanagari_reorder_reph(head,start,kind,lookupname,replacement
     if not startnext then
         current = getnext(start)
         local c = nil
-        while current and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == startfont and getattr(current,a_syllabe) == startattr do --step 5
+        while current and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == startfont and getprop(current,a_syllabe) == startattr do --step 5
             if not c then
                 local char = getchar(current)
                 -- todo: combine in one
@@ -1121,14 +1121,14 @@ function handlers.devanagari_reorder_reph(head,start,kind,lookupname,replacement
             setfield(c,"prev",start)
             -- end
             start = startnext
-            startattr = getattr(start,a_syllabe)
+            startattr = getprop(start,a_syllabe)
         end
     end
     -- leaks
     if not startnext then
         current = start
         local next = getnext(current)
-        while next and getid(next) == glyph_code and getsubtype(next) < 256 and getfont(next) == startfont and getattr(next,a_syllabe) == startattr do --step 6
+        while next and getid(next) == glyph_code and getsubtype(next) < 256 and getfont(next) == startfont and getprop(next,a_syllabe) == startattr do --step 6
             current = next
             next = getnext(current)
         end
@@ -1165,12 +1165,12 @@ function handlers.devanagari_reorder_pre_base_reordering_consonants(head,start,k
     local startnext = nil
     local startprev = nil
     local startfont = getfont(start)
-    local startattr = getattr(start,a_syllabe)
+    local startattr = getprop(start,a_syllabe)
     -- can be fast for loop + caching state
-    while current and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == startfont and getattr(current,a_syllabe) == startattr do
+    while current and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == startfont and getprop(current,a_syllabe) == startattr do
         local next = getnext(current)
-        if halant[getchar(current)] and not getattr(current,a_state) then
-            if next and getid(next) == glyph_code and getsubtype(next) < 256 and getfont(next) == font and getattr(next,a_syllabe) == startattr then
+        if halant[getchar(current)] and not getprop(current,a_state) then
+            if next and getid(next) == glyph_code and getsubtype(next) < 256 and getfont(next) == font and getprop(next,a_syllabe) == startattr then
                 local char = getchar(next)
                 if char == c_zwnj or char == c_zwj then
                     current = next
@@ -1192,9 +1192,9 @@ function handlers.devanagari_reorder_pre_base_reordering_consonants(head,start,k
     end
     if not startnext then
         current = getnext(start)
-        startattr = getattr(start,a_syllabe)
-        while current and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == startfont and getattr(current,a_syllabe) == startattr do
-            if not consonant[getchar(current)] and getattr(current,a_state) then --main
+        startattr = getprop(start,a_syllabe)
+        while current and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == startfont and getprop(current,a_syllabe) == startattr do
+            if not consonant[getchar(current)] and getprop(current,a_state) then --main
                 startnext = getnext(start)
                 removenode(start,start)
                 local prev = getprev(current)
@@ -1377,7 +1377,7 @@ local function dev2_reorder(head,start,stop,font,attr,nbspaces) -- maybe do a pa
             current = next
             current = getnext(current)
         elseif current == start then
-            setattr(current,a_state,s_rphf)
+            setprop(current,a_state,s_rphf)
             current = next
         else
             current = next
@@ -1418,8 +1418,8 @@ local function dev2_reorder(head,start,stop,font,attr,nbspaces) -- maybe do a pa
             local next = getnext(current)
             local n = locl[next] or getchar(next)
             if found[n] then
-                setattr(current,a_state,s_pref)
-                setattr(next,a_state,s_pref)
+                setprop(current,a_state,s_pref)
+                setprop(next,a_state,s_pref)
                 current = next
             end
         end
@@ -1440,7 +1440,7 @@ local function dev2_reorder(head,start,stop,font,attr,nbspaces) -- maybe do a pa
             if next ~= stop and getchar(getnext(next)) == c_zwnj then -- zwnj prevent creation of half
                 current = next
             else
-                setattr(current,a_state,s_half)
+                setprop(current,a_state,s_half)
                 if not halfpos then
                     halfpos = current
                 end
@@ -1462,8 +1462,8 @@ local function dev2_reorder(head,start,stop,font,attr,nbspaces) -- maybe do a pa
             local next = getnext(current)
             local n = locl[next] or getchar(next)
             if found[n] then
-                setattr(current,a_state,s_blwf)
-                setattr(next,a_state,s_blwf)
+                setprop(current,a_state,s_blwf)
+                setprop(next,a_state,s_blwf)
                 current = next
                 subpos = current
             end
@@ -1482,8 +1482,8 @@ local function dev2_reorder(head,start,stop,font,attr,nbspaces) -- maybe do a pa
             local next = getnext(current)
             local n = locl[next] or getchar(next)
             if found[n] then
-                setattr(current,a_state,s_pstf)
-                setattr(next,a_state,s_pstf)
+                setprop(current,a_state,s_pstf)
+                setprop(next,a_state,s_pstf)
                 current = next
                 postpos = current
             end
@@ -1501,7 +1501,7 @@ local function dev2_reorder(head,start,stop,font,attr,nbspaces) -- maybe do a pa
 
     local current, base, firstcons = start, nil, nil
 
-    if getattr(start,a_state) == s_rphf then
+    if getprop(start,a_state) == s_rphf then
         -- if syllable starts with Ra + H and script has 'Reph' then exclude Reph from candidates for base consonants
         current = getnext(getnext(start))
     end
@@ -1532,13 +1532,13 @@ local function dev2_reorder(head,start,stop,font,attr,nbspaces) -- maybe do a pa
                 local tmp = getnext(next)
                 local changestop = next == stop
                 setfield(next,"next",nil)
-                setattr(current,a_state,s_pref)
+                setprop(current,a_state,s_pref)
                 current = processcharacters(current,font)
-                setattr(current,a_state,s_blwf)
+                setprop(current,a_state,s_blwf)
                 current = processcharacters(current,font)
-                setattr(current,a_state,s_pstf)
+                setprop(current,a_state,s_pstf)
                 current = processcharacters(current,font)
-                setattr(current,a_state,unsetvalue)
+                setprop(current,a_state,unsetvalue)
                 if halant[getchar(current)] then
                     setfield(getnext(current),"next",tmp)
                     local nc = copy_node(current)
@@ -1572,7 +1572,7 @@ local function dev2_reorder(head,start,stop,font,attr,nbspaces) -- maybe do a pa
                 firstcons = current
             end
             -- check whether consonant has below-base or post-base form or is pre-base reordering Ra
-            local a = getattr(current,a_state)
+            local a = getprop(current,a_state)
             if not (a == s_pref or a == s_blwf or a == s_pstf) then
                 base = current
             end
@@ -1586,13 +1586,13 @@ local function dev2_reorder(head,start,stop,font,attr,nbspaces) -- maybe do a pa
     end
 
     if not base then
-        if getattr(start,a_state) == s_rphf then
-            setattr(start,a_state,unsetvalue)
+        if getprop(start,a_state) == s_rphf then
+            setprop(start,a_state,unsetvalue)
         end
         return head, stop, nbspaces
     else
-        if getattr(base,a_state) then
-            setattr(base,a_state,unsetvalue)
+        if getprop(base,a_state) then
+            setprop(base,a_state,unsetvalue)
         end
         basepos = base
     end
@@ -2319,14 +2319,14 @@ function methods.dev2(head,font,attr)
                 local c = syllablestart
                 local n = getnext(syllableend)
                 while c ~= n do
-                    setattr(c,a_syllabe,syllabe)
+                    setprop(c,a_syllabe,syllabe)
                     c = getnext(c)
                 end
             end
             if syllableend and syllablestart ~= syllableend then
                 head, current, nbspaces = dev2_reorder(head,syllablestart,syllableend,font,attr,nbspaces)
             end
-            if not syllableend and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == font and not getattr(current,a_state) then
+            if not syllableend and getid(current) == glyph_code and getsubtype(current) < 256 and getfont(current) == font and not getprop(current,a_state) then
                 local mark = mark_four[getchar(current)]
                 if mark then
                     head, current = inject_syntax_error(head,current,mark)
diff --git a/tex/context/base/font-ota.lua b/tex/context/base/font-ota.lua
index 9af5a3347..8d60ddc0e 100644
--- a/tex/context/base/font-ota.lua
+++ b/tex/context/base/font-ota.lua
@@ -8,6 +8,9 @@ if not modules then modules = { } end modules ['font-ota'] = {
 
 -- this might become scrp-*.lua
 
+-- [attr] : getprop or getattr
+-- [attr] : setprop or setattr
+
 local type = type
 
 if not trackers then trackers = { register = function() end } end
diff --git a/tex/context/base/font-otb.lua b/tex/context/base/font-otb.lua
index 2a7b821ea..946d552e4 100644
--- a/tex/context/base/font-otb.lua
+++ b/tex/context/base/font-otb.lua
@@ -594,8 +594,9 @@ basemethod = "independent"
 
 local function featuresinitializer(tfmdata,value)
     if true then -- value then
-        local t = trace_preparing and os.clock()
-        local features = tfmdata.shared.features
+        local starttime = trace_preparing and os.clock()
+        local features = tfmdata.shared.features
+        local fullname = trace_preparing and tfmdata.properties.fullname
         if features then
             applybasemethod("initializehashes",tfmdata)
             local collectlookups = otf.collectlookups
@@ -605,34 +606,70 @@ local function featuresinitializer(tfmdata,value)
             local language = properties.language
             local basesubstitutions = rawdata.resources.features.gsub
             local basepositionings = rawdata.resources.features.gpos
-            if basesubstitutions then
-                for feature, data in next, basesubstitutions do
-                    local value = features[feature]
-                    if value then
-                        local validlookups, lookuplist = collectlookups(rawdata,feature,script,language)
-                        if validlookups then
-                            applybasemethod("preparesubstitutions",tfmdata,feature,value,validlookups,lookuplist)
-                            registerbasefeature(feature,value)
-                        end
-                    end
-                end
-            end
-            if basepositionings then
-                for feature, data in next, basepositionings do
-                    local value = features[feature]
-                    if value then
-                        local validlookups, lookuplist = collectlookups(rawdata,feature,script,language)
-                        if validlookups then
-                            applybasemethod("preparepositionings",tfmdata,feature,features[feature],validlookups,lookuplist)
-                            registerbasefeature(feature,value)
+            --
+            -- if basesubstitutions then
+            --     for feature, data in next, basesubstitutions do
+            --         local value = features[feature]
+            --         if value then
+            --             local validlookups, lookuplist = collectlookups(rawdata,feature,script,language)
+            --             if validlookups then
+            --                 applybasemethod("preparesubstitutions",tfmdata,feature,value,validlookups,lookuplist)
+            --                 registerbasefeature(feature,value)
+            --             end
+            --         end
+            --     end
+            -- end
+            -- if basepositionings then
+            --     for feature, data in next, basepositionings do
+            --         local value = features[feature]
+            --         if value then
+            --             local validlookups, lookuplist = collectlookups(rawdata,feature,script,language)
+            --             if validlookups then
+            --                 applybasemethod("preparepositionings",tfmdata,feature,features[feature],validlookups,lookuplist)
+            --                 registerbasefeature(feature,value)
+            --             end
+            --         end
+            --     end
+            -- end
+            --
+            if basesubstitutions or basepositionings then
+                local sequences = tfmdata.resources.sequences
+                for s=1,#sequences do
+                    local sequence = sequences[s]
+                    local sfeatures = sequence.features
+                    if sfeatures then
+                        local order = sequence.order
+                        if order then
+                            for i=1,#order do --
+                                local feature = order[i]
+                                if features[feature] then
+                                    local validlookups, lookuplist = collectlookups(rawdata,feature,script,language)
+                                    if not validlookups then
+                                        -- skip
+                                    elseif basesubstitutions and basesubstitutions[feature] then
+                                        if trace_preparing then
+                                            report_prepare("filtering base feature %a for %a",feature,fullname)
+                                        end
+                                        applybasemethod("preparesubstitutions",tfmdata,feature,value,validlookups,lookuplist)
+                                        registerbasefeature(feature,value)
+                                    elseif basepositionings and basepositionings[feature] then
+                                        if trace_preparing then
+                                            report_prepare("filtering base feature %a for %a",feature,fullname)
+                                        end
+                                        applybasemethod("preparepositionings",tfmdata,feature,features[feature],validlookups,lookuplist)
+                                        registerbasefeature(feature,value)
+                                    end
+                                end
+                            end
+                        end
+                    end
                 end
             end
+            --
             registerbasehash(tfmdata)
         end
         if trace_preparing then
-            report_prepare("preparation time is %0.3f seconds for %a",os.clock()-t,tfmdata.properties.fullname)
+            report_prepare("preparation time is %0.3f seconds for %a",os.clock()-starttime,fullname)
         end
     end
 end
diff --git a/tex/context/base/font-otc.lua b/tex/context/base/font-otc.lua
index 3006e47ca..92775270d 100644
--- a/tex/context/base/font-otc.lua
+++ b/tex/context/base/font-otc.lua
@@ -70,6 +70,7 @@ local function addfeature(data,feature,specifications)
         local subtables = specification.subtables or { specification.data } or { }
         local featuretype = types[specification.type or "substitution"]
         local featureflags = specification.flags or noflags
+        local featureorder = specification.order or { feature }
         local added = false
         local featurename = format("ctx_%s_%s",feature,s)
         local st = { }
@@ -138,6 +139,7 @@ local function addfeature(data,feature,specifications)
             features = { [feature] = askedfeatures },
             flags = featureflags,
             name = featurename,
+            order = featureorder,
             subtables = st,
             type = featuretype,
         }
@@ -204,6 +206,7 @@
 local tlig_specification = {
     type = "ligature",
     features = everywhere,
     data = tlig,
+    order = { "tlig" },
     flags = noflags,
 }
@@ -226,6 +229,7 @@
 local trep_specification = {
     type = "substitution",
     features = everywhere,
     data = trep,
+    order = { "trep" },
     flags = noflags,
 }
@@ -256,6 +260,7 @@ if characters.combined then
         type = "ligature",
         features = everywhere,
         data = tcom,
+        order = { "tcom" },
         flags = noflags,
         initialize = initialize,
     }
@@ -314,6 +319,7 @@ local anum_specification = {
     {
         type = "substitution",
         features = { arab = { urd = true, dflt = true } },
+        order = { "anum" },
         data = anum_arabic,
         flags = noflags, -- { },
         valid = valid,
@@ -321,6 +327,7 @@ local anum_specification = {
     {
         type = "substitution",
         features = { arab = { urd = true } },
+        order = { "anum" },
         data = anum_persian,
         flags = noflags, -- { },
         valid = valid,
diff --git a/tex/context/base/font-otd.lua b/tex/context/base/font-otd.lua
index 919da2379..2dd23b741 100644
--- a/tex/context/base/font-otd.lua
+++ b/tex/context/base/font-otd.lua
@@ -129,59 +129,66 @@ local default = "dflt"
 
 -- what about analyze in local and not in font
 
-local function initialize(sequence,script,language,s_enabled,a_enabled,font,attr,dynamic)
+local function initialize(sequence,script,language,s_enabled,a_enabled,font,attr,dynamic,ra)
     local features = sequence.features
     if features then
-        for kind, scripts in next, features do
-            local e_e
-            local a_e = a_enabled and a_enabled[kind] -- the value (location)
-            if a_e ~= nil then
-                e_e = a_e
-            else
-                e_e = s_enabled and s_enabled[kind] -- the value (font)
-            end
-            if e_e then
-                local languages = scripts[script] or scripts[wildcard]
-                if languages then
-                    -- local valid, what = false
-                    local valid = false
-                    -- not languages[language] or languages[default] or languages[wildcard] because we want tracing
-                    -- only first attribute match check, so we assume simple fina's
-                    -- default can become a font feature itself
-                    if languages[language] then
-                        valid = e_e -- was true
-                        -- what = language
-                    -- elseif languages[default] then
-                    --     valid = true
-                    --     what = default
                    elseif languages[wildcard] then
-                        valid = e_e -- was true
-                        -- what = wildcard
-                    end
-                    if valid then
-                        local attribute = autofeatures[kind] or false
-                        -- if a_e and dynamic < 0 then
-                        --     valid = false
-                        -- end
-                        -- if trace_applied then
-                        --     local typ, action = match(sequence.type,"(.*)_(.*)") -- brrr
-                        --     report_process(
-                        --         "%s font: %03i, dynamic: %03i, kind: %s, script: %-4s, language: %-4s (%-4s), type: %s, action: %s, name: %s",
-                        --         (valid and "+") or "-",font,attr or 0,kind,script,language,what,typ,action,sequence.name)
-                        -- end
-                        if trace_applied then
-                            report_process(
-                                "font %s, dynamic %a (%a), feature %a, script %a, language %a, lookup %a, value %a",
-                                font,attr or 0,dynamic,kind,script,language,sequence.name,valid)
+        local order = sequence.order
+        if order then
+            for i=1,#order do --
+                local kind = order[i] --
+                local e_e
+                local a_e = a_enabled and a_enabled[kind] -- the value (location)
+                if a_e ~= nil then
+                    e_e = a_e
+                else
+                    e_e = s_enabled and s_enabled[kind] -- the value (font)
+                end
+                if e_e then
+                    local scripts = features[kind] --
+                    local languages = scripts[script] or scripts[wildcard]
+                    if languages then
+                        -- local valid, what = false
+                        local valid = false
+                        -- not languages[language] or languages[default] or languages[wildcard] because we want tracing
+                        -- only first attribute match check, so we assume simple fina's
+                        -- default can become a font feature itself
+                        if languages[language] then
+                            valid = e_e -- was true
+                            -- what = language
+                        -- elseif languages[default] then
+                        --     valid = true
+                        --     what = default
+                        elseif languages[wildcard] then
+                            valid = e_e -- was true
+                            -- what = wildcard
+                        end
+                        if valid then
+                            local attribute = autofeatures[kind] or false
+                            -- if a_e and dynamic < 0 then
+                            --     valid = false
+                            -- end
+                            -- if trace_applied then
+                            --     local typ, action = match(sequence.type,"(.*)_(.*)") -- brrr
+                            --     report_process(
+                            --         "%s font: %03i, dynamic: %03i, kind: %s, script: %-4s, language: %-4s (%-4s), type: %s, action: %s, name: %s",
+                            --         (valid and "+") or "-",font,attr or 0,kind,script,language,what,typ,action,sequence.name)
+                            -- end
+                            if trace_applied then
+                                report_process(
+                                    "font %s, dynamic %a (%a), feature %a, script %a, language %a, lookup %a, value %a",
+                                    font,attr or 0,dynamic,kind,script,language,sequence.name,valid)
+                            end
+                            ra[#ra+1] = { valid, attribute, sequence.chain or 0, kind, sequence }
                         end
-                end
                     end
                 end
+            -- { valid, attribute, chain, "generic", sequence } -- false anyway, could be flag instead of table
+        else
+            -- can't happen
         end
-        return false -- { valid, attribute, chain, "generic", sequence } -- false anyway, could be flag instead of table
     else
-        return false -- { false, false, chain, false, sequence } -- indirect lookup, part of chain (todo: make this a separate table)
+        -- { false, false, chain, false, sequence } -- indirect lookup, part of chain (todo: make this a separate table)
     end
 end
@@ -249,12 +256,16 @@ function otf.dataset(tfmdata,font,attr) -- attr only when explicit (as in specia
         --     return v
         -- end
         -- end)
+-- for s=1,#sequences do
+--     local v = initialize(sequences[s],script,language,s_enabled,a_enabled,font,attr,dynamic)
+--     if v then
+--         ra[#ra+1] = v
+--     end
+-- end
         for s=1,#sequences do
-            local v = initialize(sequences[s],script,language,s_enabled,a_enabled,font,attr,dynamic)
-            if v then
-                ra[#ra+1] = v
-            end
+            initialize(sequences[s],script,language,s_enabled,a_enabled,font,attr,dynamic,ra)
         end
+-- table.save((jit and "tmc-" or "tma-")..font..".log",ra) -- bug in jit
     end
     return ra
diff --git a/tex/context/base/font-otf.lua b/tex/context/base/font-otf.lua
index eb28bc368..0a5d1cfea 100644
--- a/tex/context/base/font-otf.lua
+++ b/tex/context/base/font-otf.lua
@@ -48,7 +48,7 @@ local otf = fonts.handlers.otf
 
 otf.glists = { "gsub", "gpos" }
 
-otf.version = 2.751 -- beware: also sync font-mis.lua
+otf.version = 2.755 -- beware: also sync font-mis.lua
 otf.cache = containers.define("fonts", "otf", otf.version, true)
 
 local fontdata = fonts.hashes.identifiers
@@ -687,15 +687,22 @@ actions["prepare glyphs"] = function(data,filename,raw)
                 local glyph = cidglyphs[index]
                 if glyph then
                     local unicode = glyph.unicode
+if unicode >= 0x00E000 and unicode <= 0x00F8FF then
+    unicode = -1
+elseif unicode >= 0x0F0000 and unicode <= 0x0FFFFD then
+    unicode = -1
+elseif unicode >= 0x100000 and unicode <= 0x10FFFD then
+    unicode = -1
+end
                     local name = glyph.name or cidnames[index]
-                    if not unicode or unicode == -1 or unicode >= criterium then
+                    if not unicode or unicode == -1 then -- or unicode >= criterium then
                         unicode = cidunicodes[index]
                     end
                     if unicode and descriptions[unicode] then
                         report_otf("preventing glyph %a at index %H to overload unicode %U",name or "noname",index,unicode)
                         unicode = -1
                     end
-                    if not unicode or unicode == -1 or unicode >= criterium then
+                    if not unicode or unicode == -1 then -- or unicode >= criterium then
                         if not name then
                             name = format("u%06X",private)
                         end
@@ -747,7 +754,7 @@ actions["prepare glyphs"] = function(data,filename,raw)
             if glyph then
                 local unicode = glyph.unicode
                 local name = glyph.name
-                if not unicode or unicode == -1 or unicode >= criterium then
+                if not unicode or unicode == -1 then -- or unicode >= criterium then
                     unicode = private
                     unicodes[name] = private
                     if trace_private then
@@ -809,6 +816,10 @@ end
 -- the next one is still messy but will get better when we have
 -- flattened map/enc tables in the font loader
 
+-- the next one is not using a valid base for unicode privates
+--
+-- PsuedoEncodeUnencoded(EncMap *map,struct ttfinfo *info)
+
 actions["check encoding"] = function(data,filename,raw)
     local descriptions = data.descriptions
     local resources = data.resources
@@ -825,6 +836,7 @@ actions["check encoding"] = function(data,filename,raw)
     -- local encname = lower(data.enc_name or raw.enc_name or mapdata.enc_name or "")
     local encname = lower(data.enc_name or mapdata.enc_name or "")
     local criterium = 0xFFFF -- for instance cambria has a lot of mess up there
+    local privateoffset = constructors.privateoffset
 
     -- end of messy
@@ -832,81 +844,44 @@ actions["check encoding"] = function(data,filename,raw)
         if trace_loading then
             report_otf("checking embedded unicode map %a",encname)
         end
-        -- if false then
-        --     for unicode, index in next, unicodetoindex do -- altuni already covers this
-        --         if unicode <= criterium and not descriptions[unicode] then
-        --             local parent = indices[index] -- why nil?
-        --             if not parent then
-        --                 report_otf("weird, unicode %U points to nowhere with index %H",unicode,index)
-        --             else
-        --                 local parentdescription = descriptions[parent]
-        --                 if parentdescription then
-        --                     local altuni = parentdescription.altuni
-        --                     if not altuni then
-        --                         altuni = { { unicode = unicode } }
-        --                         parentdescription.altuni = altuni
-        --                         duplicates[parent] = { unicode }
-        --                     else
-        --                         local done = false
-        --                         for i=1,#altuni do
-        --                             if altuni[i].unicode == unicode then
-        --                                 done = true
-        --                                 break
-        --                             end
-        --                         end
-        --                         if not done then
-        --                             -- let's assume simple cjk reuse
-        --                             insert(altuni,{ unicode = unicode })
-        --                             insert(duplicates[parent],unicode)
-        --                         end
-        --                     end
-        --                     -- if trace_loading then
-        --                     --     report_otf("weird, unicode %U points to nowhere with index %H",unicode,index)
-        --                     -- end
-        --                 else
-        --                     report_otf("weird, unicode %U points to %U with index %H",unicode,index)
-        --                 end
-        --             end
-        --         end
-        --     end
-        -- else
-        local hash = { }
-        for index, unicode in next, indices do -- indextounicode
-            hash[index] = descriptions[unicode]
-        end
-        local reported = { }
-        for unicode, index in next, unicodetoindex do
-            if not descriptions[unicode] then
-                local d = hash[index]
+        local reported = { }
+        -- we loop over the original unicode->index mapping but we
+        -- need to keep in mind that that one can have weird entries
+        -- so we need some extra checking
+        for maybeunicode, index in next, unicodetoindex do
+            if descriptions[maybeunicode] then
+                -- we ignore invalid unicodes (unicode = -1) (ff can map wrong to non private)
+            else
+                local unicode = indices[index]
+                if not unicode then
+                    -- weird (cjk or so?)
+                elseif maybeunicode == unicode then
+                    -- no need to add
+                elseif unicode > privateoffset then
+                    -- we have a non-unicode
+                else
+                    local d = descriptions[unicode]
                     if d then
-                    if d.unicode ~= unicode then
-                        local c = d.copies
-                        if c then
-                            c[unicode] = true
-                        else
-                            d.copies = { [unicode] = true }
-                        end
+                        local c = d.copies
+                        if c then
+                            c[maybeunicode] = true
+                        else
+                            d.copies = { [maybeunicode] = true }
                         end
-                elseif not reported[i] then
+                    elseif index and not reported[index] then
                         report_otf("missing index %i",index)
-                    reported[i] = true
+                        reported[index] = true
                     end
                 end
             end
-        for index, data in next, hash do -- indextounicode
-            data.copies = sortedkeys(data.copies)
-        end
-        for index, unicode in next, indices do -- indextounicode
-            local description = hash[index]
-            local copies = description.copies
-            if copies then
-                duplicates[unicode] = copies
-                description.copies = nil
-            else
-                report_otf("copies but no unicode parent %U",unicode)
-            end
+        end
+        for unicode, data in next, descriptions do
+            local d = data.copies
+            if d then
+                duplicates[unicode] = sortedkeys(d)
+                data.copies = nil
             end
-        -- end
+        end
     elseif properties.cidinfo then
         report_otf("warning: no unicode map, used cidmap %a",properties.cidinfo.usedname)
     else
@@ -939,6 +914,7 @@ actions["add duplicates"] = function(data,filename,raw)
                 report_otf("ignoring excessive duplicates of %U (n=%s)",unicode,nofduplicates)
             end
         else
+            -- local validduplicates = { }
             for i=1,nofduplicates do
                 local u = d[i]
                 if not descriptions[u] then
@@ -957,16 +933,18 @@ actions["add duplicates"] = function(data,filename,raw)
                         end
                         -- todo: lookups etc
                     end
-                    if u > 0 then
+                    if u > 0 then -- and
                         local duplicate = table.copy(description) -- else packing problem
                         duplicate.comment = format("copy of U+%05X", unicode)
                         descriptions[u] = duplicate
+                        -- validduplicates[#validduplicates+1] = u
                         if trace_loading then
                             report_otf("duplicating %U to %U with index %H (%s kerns)",unicode,u,description.index,n)
                         end
                     end
                 end
            end
+            -- duplicates[unicode] = #validduplicates > 0 and validduplicates or nil
        end
    end
 end
@@ -1197,10 +1175,16 @@ actions["reorganize subtables"] = function(data,filename,raw)
             elseif features then
                 -- scripts, tag, ismac
                 local f = { }
+                local o = { }
                 for i=1,#features do
                     local df = features[i]
                     local tag = strip(lower(df.tag))
-                    local ft = f[tag] if not ft then ft = {} f[tag] = ft end
+                    local ft = f[tag]
+                    if not ft then
+                        ft = { }
+                        f[tag] = ft
+                        o[#o+1] = tag
+                    end
                     local dscripts = df.scripts
                     for i=1,#dscripts do
                         local d = dscripts[i]
@@ -1220,6 +1204,7 @@ actions["reorganize subtables"] = function(data,filename,raw)
                     subtables = subtables,
                     markclass = markclass,
                     features = f,
+                    order = o,
                 }
             else
                 lookups[name] = {
diff --git a/tex/context/base/font-otn.lua b/tex/context/base/font-otn.lua
index 75e95749c..25c750ae8 100644
--- a/tex/context/base/font-otn.lua
+++ b/tex/context/base/font-otn.lua
@@ -182,17 +182,18 @@
 local tonode = nuts.tonode
 local tonut = nuts.tonut
 
 local getfield = nuts.getfield
+local setfield = nuts.setfield
 local getnext = nuts.getnext
 local getprev = nuts.getprev
 local getid = nuts.getid
 local getattr = nuts.getattr
+local setattr = nuts.setattr
+local getprop = nuts.getprop
+local setprop = nuts.setprop
 local getfont = nuts.getfont
 local getsubtype = nuts.getsubtype
 local getchar = nuts.getchar
 
-local setfield = nuts.setfield
-local setattr = nuts.setattr
-
 local insert_node_after = nuts.insert_after
 local delete_node = nuts.delete
 local copy_node = nuts.copy
@@ -234,13 +235,7 @@ local privateattribute = attributes.private
-- of only some. local a_state = privateattribute('state') -local a_markbase = privateattribute('markbase') -local a_markmark = privateattribute('markmark') -local a_markdone = privateattribute('markdone') -- assigned at the injection end -local a_cursbase = privateattribute('cursbase') -local a_curscurs = privateattribute('curscurs') -local a_cursdone = privateattribute('cursdone') -local a_kernpair = privateattribute('kernpair') +local a_cursbase = privateattribute('cursbase') -- to be checked local a_ligacomp = privateattribute('ligacomp') -- assigned here (ideally it should be combined) local injections = nodes.injections @@ -249,9 +244,7 @@ local setcursive = injections.setcursive local setkern = injections.setkern local setpair = injections.setpair -local markonce = true local cursonce = true -local kernonce = true local fonthashes = fonts.hashes local fontdata = fonthashes.identifiers @@ -459,9 +452,9 @@ local function toligature(kind,lookupname,head,start,stop,char,markflag,discfoun baseindex = baseindex + componentindex componentindex = getcomponentindex(start) elseif not deletemarks then -- quite fishy - setattr(start,a_ligacomp,baseindex + (getattr(start,a_ligacomp) or componentindex)) + setprop(start,a_ligacomp,baseindex + (getprop(start,a_ligacomp) or componentindex)) if trace_marks then - logwarning("%s: keep mark %s, gets index %s",pref(kind,lookupname),gref(char),getattr(start,a_ligacomp)) + logwarning("%s: keep mark %s, gets index %s",pref(kind,lookupname),gref(char),getprop(start,a_ligacomp)) end head, current = insert_node_after(head,current,copy_node(start)) -- unlikely that mark has components elseif trace_marks then @@ -475,9 +468,9 @@ local function toligature(kind,lookupname,head,start,stop,char,markflag,discfoun while start and getid(start) == glyph_code do local char = getchar(start) if marks[char] then - setattr(start,a_ligacomp,baseindex + (getattr(start,a_ligacomp) or componentindex)) + setprop(start,a_ligacomp,baseindex + 
(getprop(start,a_ligacomp) or componentindex)) if trace_marks then - logwarning("%s: set mark %s, gets index %s",pref(kind,lookupname),gref(char),getattr(start,a_ligacomp)) + logwarning("%s: set mark %s, gets index %s",pref(kind,lookupname),gref(char),getprop(start,a_ligacomp)) end else break @@ -710,7 +703,7 @@ function handlers.gpos_mark2base(head,start,kind,lookupname,markanchors,sequence if al[anchor] then local ma = markanchors[anchor] if ma then - local dx, dy, bound = setmark(start,base,tfmdata.parameters.factor,rlmode,ba,ma) + local dx, dy, bound = setmark(start,base,tfmdata.parameters.factor,rlmode,ba,ma,characters[basechar]) if trace_marks then logprocess("%s, anchor %s, bound %s: anchoring mark %s to basechar %s => (%p,%p)", pref(kind,lookupname),anchor,bound,gref(markchar),gref(basechar),dx,dy) @@ -759,7 +752,7 @@ function handlers.gpos_mark2ligature(head,start,kind,lookupname,markanchors,sequ end end end - local index = getattr(start,a_ligacomp) + local index = getprop(start,a_ligacomp) local baseanchors = descriptions[basechar] if baseanchors then baseanchors = baseanchors.anchors @@ -773,7 +766,7 @@ function handlers.gpos_mark2ligature(head,start,kind,lookupname,markanchors,sequ if ma then ba = ba[index] if ba then - local dx, dy, bound = setmark(start,base,tfmdata.parameters.factor,rlmode,ba,ma) -- index + local dx, dy, bound = setmark(start,base,tfmdata.parameters.factor,rlmode,ba,ma,characters[basechar]) -- index if trace_marks then logprocess("%s, anchor %s, index %s, bound %s: anchoring mark %s to baselig %s at index %s => (%p,%p)", pref(kind,lookupname),anchor,index,bound,gref(markchar),gref(basechar),index,dx,dy) @@ -809,10 +802,10 @@ function handlers.gpos_mark2mark(head,start,kind,lookupname,markanchors,sequence local markchar = getchar(start) if marks[markchar] then local base = getprev(start) -- [glyph] [basemark] [start=mark] - local slc = getattr(start,a_ligacomp) + local slc = getprop(start,a_ligacomp) if slc then -- a rather messy loop 
... needs checking with husayni while base do - local blc = getattr(base,a_ligacomp) + local blc = getprop(base,a_ligacomp) if blc and blc ~= slc then base = getprev(base) else @@ -833,7 +826,7 @@ function handlers.gpos_mark2mark(head,start,kind,lookupname,markanchors,sequence if al[anchor] then local ma = markanchors[anchor] if ma then - local dx, dy, bound = setmark(start,base,tfmdata.parameters.factor,rlmode,ba,ma,true) + local dx, dy, bound = setmark(start,base,tfmdata.parameters.factor,rlmode,ba,ma,characters[basechar]) if trace_marks then logprocess("%s, anchor %s, bound %s: anchoring mark %s to basemark %s => (%p,%p)", pref(kind,lookupname),anchor,bound,gref(markchar),gref(basechar),dx,dy) @@ -861,7 +854,7 @@ function handlers.gpos_mark2mark(head,start,kind,lookupname,markanchors,sequence end function handlers.gpos_cursive(head,start,kind,lookupname,exitanchors,sequence) -- to be checked - local alreadydone = cursonce and getattr(start,a_cursbase) + local alreadydone = cursonce and getprop(start,a_cursbase) if not alreadydone then local done = false local startchar = getchar(start) @@ -1343,7 +1336,7 @@ function chainprocs.gpos_mark2base(head,start,stop,kind,chainname,currentcontext if al[anchor] then local ma = markanchors[anchor] if ma then - local dx, dy, bound = setmark(start,base,tfmdata.parameters.factor,rlmode,ba,ma) + local dx, dy, bound = setmark(start,base,tfmdata.parameters.factor,rlmode,ba,ma,characters[basechar]) if trace_marks then logprocess("%s, anchor %s, bound %s: anchoring mark %s to basechar %s => (%p,%p)", cref(kind,chainname,chainlookupname,lookupname),anchor,bound,gref(markchar),gref(basechar),dx,dy) @@ -1399,7 +1392,7 @@ function chainprocs.gpos_mark2ligature(head,start,stop,kind,chainname,currentcon end end -- todo: like marks a ligatures hash - local index = getattr(start,a_ligacomp) + local index = getprop(start,a_ligacomp) local baseanchors = descriptions[basechar].anchors if baseanchors then local baseanchors = 
baseanchors['baselig'] @@ -1411,7 +1404,7 @@ function chainprocs.gpos_mark2ligature(head,start,stop,kind,chainname,currentcon if ma then ba = ba[index] if ba then - local dx, dy, bound = setmark(start,base,tfmdata.parameters.factor,rlmode,ba,ma) -- index + local dx, dy, bound = setmark(start,base,tfmdata.parameters.factor,rlmode,ba,ma,characters[basechar]) if trace_marks then logprocess("%s, anchor %s, bound %s: anchoring mark %s to baselig %s at index %s => (%p,%p)", cref(kind,chainname,chainlookupname,lookupname),anchor,a or bound,gref(markchar),gref(basechar),index,dx,dy) @@ -1441,62 +1434,57 @@ end function chainprocs.gpos_mark2mark(head,start,stop,kind,chainname,currentcontext,lookuphash,currentlookup,chainlookupname) local markchar = getchar(start) if marks[markchar] then - -- local alreadydone = markonce and getattr(start,a_markmark) - -- if not alreadydone then - -- local markanchors = descriptions[markchar].anchors markanchors = markanchors and markanchors.mark - local subtables = currentlookup.subtables - local lookupname = subtables[1] - local markanchors = lookuphash[lookupname] - if markanchors then - markanchors = markanchors[markchar] - end - if markanchors then - local base = getprev(start) -- [glyph] [basemark] [start=mark] - local slc = getattr(start,a_ligacomp) - if slc then -- a rather messy loop ... needs checking with husayni - while base do - local blc = getattr(base,a_ligacomp) - if blc and blc ~= slc then - base = getprev(base) - else - break - end + -- local markanchors = descriptions[markchar].anchors markanchors = markanchors and markanchors.mark + local subtables = currentlookup.subtables + local lookupname = subtables[1] + local markanchors = lookuphash[lookupname] + if markanchors then + markanchors = markanchors[markchar] + end + if markanchors then + local base = getprev(start) -- [glyph] [basemark] [start=mark] + local slc = getprop(start,a_ligacomp) + if slc then -- a rather messy loop ... 
needs checking with husayni + while base do + local blc = getprop(base,a_ligacomp) + if blc and blc ~= slc then + base = getprev(base) + else + break end end - if base and getid(base) == glyph_code and getfont(base) == currentfont and getsubtype(base)<256 then -- subtype test can go - local basechar = getchar(base) - local baseanchors = descriptions[basechar].anchors + end + if base and getid(base) == glyph_code and getfont(base) == currentfont and getsubtype(base)<256 then -- subtype test can go + local basechar = getchar(base) + local baseanchors = descriptions[basechar].anchors + if baseanchors then + baseanchors = baseanchors['basemark'] if baseanchors then - baseanchors = baseanchors['basemark'] - if baseanchors then - local al = anchorlookups[lookupname] - for anchor,ba in next, baseanchors do - if al[anchor] then - local ma = markanchors[anchor] - if ma then - local dx, dy, bound = setmark(start,base,tfmdata.parameters.factor,rlmode,ba,ma,true) - if trace_marks then - logprocess("%s, anchor %s, bound %s: anchoring mark %s to basemark %s => (%p,%p)", - cref(kind,chainname,chainlookupname,lookupname),anchor,bound,gref(markchar),gref(basechar),dx,dy) - end - return head, start, true + local al = anchorlookups[lookupname] + for anchor,ba in next, baseanchors do + if al[anchor] then + local ma = markanchors[anchor] + if ma then + local dx, dy, bound = setmark(start,base,tfmdata.parameters.factor,rlmode,ba,ma,characters[basechar]) + if trace_marks then + logprocess("%s, anchor %s, bound %s: anchoring mark %s to basemark %s => (%p,%p)", + cref(kind,chainname,chainlookupname,lookupname),anchor,bound,gref(markchar),gref(basechar),dx,dy) end + return head, start, true end end - if trace_bugs then - logwarning("%s: no matching anchors for mark %s and basemark %s",gref(kind,chainname,chainlookupname,lookupname),gref(markchar),gref(basechar)) - end + end + if trace_bugs then + logwarning("%s: no matching anchors for mark %s and basemark 
%s",gref(kind,chainname,chainlookupname,lookupname),gref(markchar),gref(basechar)) end end - elseif trace_bugs then - logwarning("%s: prev node is no mark",cref(kind,chainname,chainlookupname,lookupname)) end elseif trace_bugs then - logwarning("%s: mark %s has no anchors",cref(kind,chainname,chainlookupname,lookupname),gref(markchar)) + logwarning("%s: prev node is no mark",cref(kind,chainname,chainlookupname,lookupname)) end - -- elseif trace_marks and trace_details then - -- logprocess("%s, mark %s is already bound (n=%s), ignoring mark2mark",pref(kind,lookupname),gref(markchar),alreadydone) - -- end + elseif trace_bugs then + logwarning("%s: mark %s has no anchors",cref(kind,chainname,chainlookupname,lookupname),gref(markchar)) + end elseif trace_bugs then logwarning("%s: mark %s is no mark",cref(kind,chainname,chainlookupname),gref(markchar)) end @@ -1504,7 +1492,7 @@ function chainprocs.gpos_mark2mark(head,start,stop,kind,chainname,currentcontext end function chainprocs.gpos_cursive(head,start,stop,kind,chainname,currentcontext,lookuphash,currentlookup,chainlookupname) - local alreadydone = cursonce and getattr(start,a_cursbase) + local alreadydone = cursonce and getprop(start,a_cursbase) if not alreadydone then local startchar = getchar(start) local subtables = currentlookup.subtables @@ -2065,14 +2053,21 @@ local autofeatures = fonts.analyzers.features -- was: constants local function initialize(sequence,script,language,enabled) local features = sequence.features if features then - for kind, scripts in next, features do - local valid = enabled[kind] - if valid then - local languages = scripts[script] or scripts[wildcard] - if languages and (languages[language] or languages[wildcard]) then - return { valid, autofeatures[kind] or false, sequence.chain or 0, kind, sequence } + local order = sequence.order + if order then + for i=1,#order do -- + local kind = order[i] -- + local valid = enabled[kind] + if valid then + local scripts = features[kind] -- + local 
languages = scripts[script] or scripts[wildcard] + if languages and (languages[language] or languages[wildcard]) then + return { valid, autofeatures[kind] or false, sequence.chain or 0, kind, sequence } + end end end + else + -- can't happen end end return false @@ -2101,19 +2096,12 @@ function otf.dataset(tfmdata,font) -- generic variant, overloaded in context } rs[language] = rl local sequences = tfmdata.resources.sequences --- setmetatableindex(rl, function(t,k) --- if type(k) == "number" then --- local v = enabled and initialize(sequences[k],script,language,enabled) --- t[k] = v --- return v --- end --- end) -for s=1,#sequences do - local v = enabled and initialize(sequences[s],script,language,enabled) - if v then - rl[#rl+1] = v - end -end + for s=1,#sequences do + local v = enabled and initialize(sequences[s],script,language,enabled) + if v then + rl[#rl+1] = v + end + end end return rl end @@ -2141,7 +2129,7 @@ end -- attr = attr or false -- -- local a = getattr(start,0) --- if (a == attr and (not attribute or getattr(start,a_state) == attribute)) or (not attribute or getattr(start,a_state) == attribute) then +-- if (a == attr and (not attribute or getprop(start,a_state) == attribute)) or (not attribute or getprop(start,a_state) == attribute) then -- -- the action -- end @@ -2263,9 +2251,9 @@ local function featuresprocessor(head,font,attr) if id == glyph_code and getfont(start) == font and getsubtype(start) < 256 then local a = getattr(start,0) if a then - a = (a == attr) and (not attribute or getattr(start,a_state) == attribute) + a = (a == attr) and (not attribute or getprop(start,a_state) == attribute) else - a = not attribute or getattr(start,a_state) == attribute + a = not attribute or getprop(start,a_state) == attribute end if a then local lookupmatch = lookupcache[getchar(start)] @@ -2299,9 +2287,9 @@ local function featuresprocessor(head,font,attr) -- setfield(next,"prev",prev) local a = getattr(prev,0) if a then - a = (a == attr) and (not attribute 
or getattr(prev,a_state) == attribute) + a = (a == attr) and (not attribute or getprop(prev,a_state) == attribute) else - a = not attribute or getattr(prev,a_state) == attribute + a = not attribute or getprop(prev,a_state) == attribute end if a then local lookupmatch = lookupcache[getchar(prev)] @@ -2326,9 +2314,9 @@ local function featuresprocessor(head,font,attr) if getfont(start) == font and getsubtype(start) < 256 then local a = getattr(start,0) if a then - a = (a == attr) and (not attribute or getattr(start,a_state) == attribute) + a = (a == attr) and (not attribute or getprop(start,a_state) == attribute) else - a = not attribute or getattr(start,a_state) == attribute + a = not attribute or getprop(start,a_state) == attribute end if a then local lookupmatch = lookupcache[getchar(start)] @@ -2424,9 +2412,9 @@ elseif typ == "gpos_single" or typ == "gpos_pair" then if id == glyph_code and getfont(start) == font and getsubtype(start) < 256 then local a = getattr(start,0) if a then - a = (a == attr) and (not attribute or getattr(start,a_state) == attribute) + a = (a == attr) and (not attribute or getprop(start,a_state) == attribute) else - a = not attribute or getattr(start,a_state) == attribute + a = not attribute or getprop(start,a_state) == attribute end if a then for i=1,ns do @@ -2472,9 +2460,9 @@ elseif typ == "gpos_single" or typ == "gpos_pair" then -- setfield(next,"prev",prev) local a = getattr(prev,0) if a then - a = (a == attr) and (not attribute or getattr(prev,a_state) == attribute) + a = (a == attr) and (not attribute or getprop(prev,a_state) == attribute) else - a = not attribute or getattr(prev,a_state) == attribute + a = not attribute or getprop(prev,a_state) == attribute end if a then for i=1,ns do @@ -2507,9 +2495,9 @@ elseif typ == "gpos_single" or typ == "gpos_pair" then if getfont(start) == font and getsubtype(start) < 256 then local a = getattr(start,0) if a then - a = (a == attr) and (not attribute or getattr(start,a_state) == attribute) + a 
= (a == attr) and (not attribute or getprop(start,a_state) == attribute) else - a = not attribute or getattr(start,a_state) == attribute + a = not attribute or getprop(start,a_state) == attribute end if a then for i=1,ns do diff --git a/tex/context/base/font-otp.lua b/tex/context/base/font-otp.lua index 217bb7535..c80ee86ae 100644 --- a/tex/context/base/font-otp.lua +++ b/tex/context/base/font-otp.lua @@ -407,6 +407,14 @@ local function packdata(data) features[script] = pack_normal(feature) end end + local order = sequence.order + if order then + sequence.order = pack_indexed(order) + end + local markclass = sequence.markclass + if markclass then + sequence.markclass = pack_boolean(markclass) + end end end local lookups = resources.lookups @@ -825,6 +833,20 @@ local function unpackdata(data) end end end + local order = feature.order + if order then + local tv = tables[order] + if tv then + feature.order = tv + end + end + local markclass = feature.markclass + if markclass then + local tv = tables[markclass] + if tv then + feature.markclass = tv + end + end end end local lookups = resources.lookups diff --git a/tex/context/base/font-otx.lua b/tex/context/base/font-otx.lua index b7d2ae0bc..dc0469e39 100644 --- a/tex/context/base/font-otx.lua +++ b/tex/context/base/font-otx.lua @@ -37,13 +37,12 @@ local getfield = nuts.getfield local getnext = nuts.getnext local getprev = nuts.getprev local getid = nuts.getid -local getattr = nuts.getattr +local getprop = nuts.getprop +local setprop = nuts.setprop local getfont = nuts.getfont local getsubtype = nuts.getsubtype local getchar = nuts.getchar -local setattr = nuts.setattr - local traverse_id = nuts.traverse_id local traverse_node_list = nuts.traverse local end_of_math = nuts.end_of_math @@ -124,34 +123,34 @@ function analyzers.setstate(head,font) -- we can skip math if d then if d.class == "mark" then done = true - setattr(current,a_state,s_mark) + setprop(current,a_state,s_mark) elseif useunicodemarks and 
categories[char] == "mn" then done = true - setattr(current,a_state,s_mark) + setprop(current,a_state,s_mark) elseif n == 0 then first, last, n = current, current, 1 - setattr(current,a_state,s_init) + setprop(current,a_state,s_init) else last, n = current, n+1 - setattr(current,a_state,s_medi) + setprop(current,a_state,s_medi) end else -- finish if first and first == last then - setattr(last,a_state,s_isol) + setprop(last,a_state,s_isol) elseif last then - setattr(last,a_state,s_fina) + setprop(last,a_state,s_fina) end first, last, n = nil, nil, 0 end elseif id == disc_code then -- always in the middle - setattr(current,a_state,s_medi) + setprop(current,a_state,s_medi) last = current else -- finish if first and first == last then - setattr(last,a_state,s_isol) + setprop(last,a_state,s_isol) elseif last then - setattr(last,a_state,s_fina) + setprop(last,a_state,s_fina) end first, last, n = nil, nil, 0 if id == math_code then @@ -161,9 +160,9 @@ function analyzers.setstate(head,font) -- we can skip math current = getnext(current) end if first and first == last then - setattr(last,a_state,s_isol) + setprop(last,a_state,s_isol) elseif last then - setattr(last,a_state,s_fina) + setprop(last,a_state,s_fina) end return head, done end @@ -279,92 +278,92 @@ function methods.arab(head,font,attr) current = tonut(current) while current do local id = getid(current) - if id == glyph_code and getfont(current) == font and getsubtype(current)<256 and not getattr(current,a_state) then + if id == glyph_code and getfont(current) == font and getsubtype(current)<256 and not getprop(current,a_state) then done = true local char = getchar(current) local classifier = classifiers[char] if not classifier then if last then if c_last == s_medi or c_last == s_fina then - setattr(last,a_state,s_fina) + setprop(last,a_state,s_fina) else warning(last,"fina") - setattr(last,a_state,s_error) + setprop(last,a_state,s_error) end first, last = nil, nil elseif first then if c_first == s_medi or c_first 
== s_fina then - setattr(first,a_state,s_isol) + setprop(first,a_state,s_isol) else warning(first,"isol") - setattr(first,a_state,s_error) + setprop(first,a_state,s_error) end first = nil end elseif classifier == s_mark then - setattr(current,a_state,s_mark) + setprop(current,a_state,s_mark) elseif classifier == s_isol then if last then if c_last == s_medi or c_last == s_fina then - setattr(last,a_state,s_fina) + setprop(last,a_state,s_fina) else warning(last,"fina") - setattr(last,a_state,s_error) + setprop(last,a_state,s_error) end first, last = nil, nil elseif first then if c_first == s_medi or c_first == s_fina then - setattr(first,a_state,s_isol) + setprop(first,a_state,s_isol) else warning(first,"isol") - setattr(first,a_state,s_error) + setprop(first,a_state,s_error) end first = nil end - setattr(current,a_state,s_isol) + setprop(current,a_state,s_isol) elseif classifier == s_medi then if first then last = current c_last = classifier - setattr(current,a_state,s_medi) + setprop(current,a_state,s_medi) else - setattr(current,a_state,s_init) + setprop(current,a_state,s_init) first = current c_first = classifier end elseif classifier == s_fina then if last then - if getattr(last,a_state) ~= s_init then - setattr(last,a_state,s_medi) + if getprop(last,a_state) ~= s_init then + setprop(last,a_state,s_medi) end - setattr(current,a_state,s_fina) + setprop(current,a_state,s_fina) first, last = nil, nil elseif first then - -- if getattr(first,a_state) ~= s_init then + -- if getprop(first,a_state) ~= s_init then -- -- needs checking - -- setattr(first,a_state,s_medi) + -- setprop(first,a_state,s_medi) -- end - setattr(current,a_state,s_fina) + setprop(current,a_state,s_fina) first = nil else - setattr(current,a_state,s_isol) + setprop(current,a_state,s_isol) end else -- classifier == s_rest - setattr(current,a_state,s_rest) + setprop(current,a_state,s_rest) if last then if c_last == s_medi or c_last == s_fina then - setattr(last,a_state,s_fina) + 
setprop(last,a_state,s_fina) else warning(last,"fina") - setattr(last,a_state,s_error) + setprop(last,a_state,s_error) end first, last = nil, nil elseif first then if c_first == s_medi or c_first == s_fina then - setattr(first,a_state,s_isol) + setprop(first,a_state,s_isol) else warning(first,"isol") - setattr(first,a_state,s_error) + setprop(first,a_state,s_error) end first = nil end @@ -372,18 +371,18 @@ function methods.arab(head,font,attr) else if last then if c_last == s_medi or c_last == s_fina then - setattr(last,a_state,s_fina) + setprop(last,a_state,s_fina) else warning(last,"fina") - setattr(last,a_state,s_error) + setprop(last,a_state,s_error) end first, last = nil, nil elseif first then if c_first == s_medi or c_first == s_fina then - setattr(first,a_state,s_isol) + setprop(first,a_state,s_isol) else warning(first,"isol") - setattr(first,a_state,s_error) + setprop(first,a_state,s_error) end first = nil end @@ -395,17 +394,17 @@ function methods.arab(head,font,attr) end if last then if c_last == s_medi or c_last == s_fina then - setattr(last,a_state,s_fina) + setprop(last,a_state,s_fina) else warning(last,"fina") - setattr(last,a_state,s_error) + setprop(last,a_state,s_error) end elseif first then if c_first == s_medi or c_first == s_fina then - setattr(first,a_state,s_isol) + setprop(first,a_state,s_isol) else warning(first,"isol") - setattr(first,a_state,s_error) + setprop(first,a_state,s_error) end end return head, done diff --git a/tex/context/base/font-sel.lua b/tex/context/base/font-sel.lua index 2881917eb..86300c2db 100644 --- a/tex/context/base/font-sel.lua +++ b/tex/context/base/font-sel.lua @@ -174,28 +174,58 @@ local names = { ["heavyitalic"] = { "heavyitalic" }, }, ["default"] = { -- weight, width, italic - ["thin"] = { weight = { 100, 200, 300, 400, 500 }, width = 5, italic = false }, - ["extralight"] = { weight = { 200, 100, 300, 400, 500 }, width = 5, italic = false }, - ["light"] = { weight = { 300, 200, 100, 400, 500 }, width = 5, italic 
= false }, - ["regular"] = { weight = { 400, 500, 300, 200, 100 }, width = 5, italic = false }, - ["italic"] = { weight = { 400, 500, 300, 200, 100 }, width = 5, italic = true }, - ["medium"] = { weight = { 500, 400, 300, 200, 100 }, width = 5, italic = false }, - ["demibold"] = { weight = { 600, 700, 800, 900 }, width = 5, italic = false }, - ["bold"] = { weight = { 700, 600, 800, 900 }, width = 5, italic = false }, - ["bolditalic"] = { weight = { 700, 600, 800, 900 }, width = 5, italic = true }, - ["smallcaps"] = { weight = { 400, 500, 300, 200, 100 }, width = 5, italic = false }, - ["heavy"] = { weight = { 800, 900, 700, 600 }, width = 5, italic = false }, - ["black"] = { weight = { 900, 800, 700, 600 }, width = 5, italic = false }, + ["thin"] = { weight = { 100, 200, 300, 400, 500 }, width = 5, italic = false }, + ["thinitalic"] = { weight = { 100, 200, 300, 400, 500 }, width = 5, italic = true }, + ["extralight"] = { weight = { 200, 100, 300, 400, 500 }, width = 5, italic = false }, + ["extralightitalic"] = { weight = { 200, 100, 300, 400, 500 }, width = 5, italic = true }, + ["light"] = { weight = { 300, 200, 100, 400, 500 }, width = 5, italic = false }, + ["lightitalic"] = { weight = { 300, 200, 100, 400, 500 }, width = 5, italic = true }, + ["regular"] = { weight = { 400, 500, 300, 200, 100 }, width = 5, italic = false }, + ["italic"] = { weight = { 400, 500, 300, 200, 100 }, width = 5, italic = true }, + ["medium"] = { weight = { 500, 400, 300, 200, 100 }, width = 5, italic = false }, + ["mediumitalic"] = { weight = { 500, 400, 300, 200, 100 }, width = 5, italic = true }, + ["demibold"] = { weight = { 600, 700, 800, 900 }, width = 5, italic = false }, + ["demibolditalic"] = { weight = { 600, 700, 800, 900 }, width = 5, italic = true }, + ["bold"] = { weight = { 700, 600, 800, 900 }, width = 5, italic = false }, + ["bolditalic"] = { weight = { 700, 600, 800, 900 }, width = 5, italic = true }, + ["extrabold"] = { weight = { 800, 900, 700, 600 }, width = 5, 
italic = false }, + ["extrabolditalic"] = { weight = { 800, 900, 700, 600 }, width = 5, italic = true }, + ["heavy"] = { weight = { 900, 800, 700, 600 }, width = 5, italic = false }, + ["heavyitalic"] = { weight = { 900, 800, 700, 600 }, width = 5, italic = true }, } } -names.simplefonts.slanted = names.simplefonts.italic -names.simplefonts.boldslanted = names.simplefonts.bolditalic +-- simplefonts synonyms -names.default.normal = names.default.regular -names.default.slanted = names.default.italic -names.default.semibold = names.default.demibold -names.default.boldslanted = names.default.bolditalic +names.simplefonts.slanted = names.simplefonts.italic +names.simplefonts.boldslanted = names.simplefonts.bolditalic + +-- default synonyms + +names.default.ultralight = names.default.extralight +names.default.semibold = names.default.demibold +names.default.ultrabold = names.default.extrabold +names.default.black = names.default.heavy + +names.default.ultralightitalic = names.default.extralightitalic +names.default.semibolditalic = names.default.demibolditalic +names.default.ultrabolditalic = names.default.extrabolditalic +names.default.blackitalic = names.default.heavyitalic + +names.default.thinslanted = names.default.thinitalic +names.default.extralightslanted = names.default.extralightitalic +names.default.ultralightslanted = names.default.extralightitalic +names.default.lightslanted = names.default.lightitalic +names.default.slanted = names.default.italic +names.default.demiboldslanted = names.default.demibolditalic +names.default.semiboldslanted = names.default.demibolditalic +names.default.boldslanted = names.default.bolditalic +names.default.extraboldslanted = names.default.extrabolditalic +names.default.ultraboldslanted = names.default.extrabolditalic +names.default.heavyslanted = names.default.heavyitalic +names.default.blackslanted = names.default.heavyitalic + +names.default.smallcaps = names.default.regular local mathsettings = { ["asanamath"] = { @@ -494,7 
+524,7 @@ local function definefontsynonym(data,alternative,index,fallback) end for _, entry in next, fontdata do local designsize = entry["designsize"] or 100 - if designsize == 100 or designsize == 120 or designsize == 0 or #fontdata == 1 then + if designsize == 100 or designsize == 110 or designsize == 120 or designsize == 0 or #fontdata == 1 then local filepath, filename = splitbase(entry["filename"]) registerdesignsizes( fontfile, "default", filename ) break @@ -600,7 +630,7 @@ local function definemathfontfallback(data,alternative,index) for _, entry in next, fontdata do local filename = entry["filename"] local designsize = entry["designsize"] or 100 - if designsize == 100 or designsize == 120 or designsize == 0 or #fontdata == 1 then + if designsize == 100 or designsize == 110 or designsize == 120 or designsize == 0 or #fontdata == 1 then context.definefontfallback( { fallback }, { formatters["file:%s*%s"](filename,features) }, { range }, { rscale = rscale, check = check, force = force, offset = offset } ) break end diff --git a/tex/context/base/font-sel.mkvi b/tex/context/base/font-sel.mkvi index 3d4dc6807..0b1d10c51 100644 --- a/tex/context/base/font-sel.mkvi +++ b/tex/context/base/font-sel.mkvi @@ -1,6 +1,6 @@ %D \module %D [ file=font-sel, -%D version=2013.10.19, +%D version=2014.03.10, %D title=\CONTEXT\ User Module, %D subtitle=Selectfont, %D author=Wolfgang Schuster, @@ -115,11 +115,18 @@ % unknown preset \fi} -\definefontfamilypreset [range:chinese] [\c!range={cjkcompatibilityforms,cjkcompatibilityideographs,cjkcompatibilityideographssupplement,cjkradicalssupplement,cjkstrokes,cjksymbolsandpunctuation,cjkunifiedideographs,cjkunifiedideographsextensiona,cjkunifiedideographsextensionb,halfwidthandfullwidthforms,verticalforms,bopomofo,bopomofoextended}] -\definefontfamilypreset [range:japanese] 
[\c!range={cjkcompatibilityforms,cjkcompatibilityideographs,cjkcompatibilityideographssupplement,cjkradicalssupplement,cjkstrokes,cjksymbolsandpunctuation,cjkunifiedideographs,cjkunifiedideographsextensiona,cjkunifiedideographsextensionb,halfwidthandfullwidthforms,verticalforms,hiragana,katakana}] -\definefontfamilypreset [range:korean] [\c!range={cjkcompatibilityforms,cjkcompatibilityideographs,cjkcompatibilityideographssupplement,cjkradicalssupplement,cjkstrokes,cjksymbolsandpunctuation,cjkunifiedideographs,cjkunifiedideographsextensiona,cjkunifiedideographsextensionb,halfwidthandfullwidthforms,verticalforms,hangulcompatibilityjamo,hanguljamo,hanguljamoextendeda,hanguljamoextendedb,hangulsyllables}] -\definefontfamilypreset [range:cyrillic] [\c!range={cyrillic,cyrillicextendeda,cyrillicextendedb,cyrillicsupplement}] -\definefontfamilypreset [range:greek] [\c!range={greekandcoptic,greekextended,ancientgreeknumbers}] +%definefontfamilypreset [range:chinese] [\c!range={cjkcompatibilityforms,cjkcompatibilityideographs,cjkcompatibilityideographssupplement,cjkradicalssupplement,cjkstrokes,cjksymbolsandpunctuation,cjkunifiedideographs,cjkunifiedideographsextensiona,cjkunifiedideographsextensionb,halfwidthandfullwidthforms,verticalforms,bopomofo,bopomofoextended}] +%definefontfamilypreset [range:japanese] [\c!range={cjkcompatibilityforms,cjkcompatibilityideographs,cjkcompatibilityideographssupplement,cjkradicalssupplement,cjkstrokes,cjksymbolsandpunctuation,cjkunifiedideographs,cjkunifiedideographsextensiona,cjkunifiedideographsextensionb,halfwidthandfullwidthforms,verticalforms,hiragana,katakana}] +%definefontfamilypreset [range:korean] 
[\c!range={cjkcompatibilityforms,cjkcompatibilityideographs,cjkcompatibilityideographssupplement,cjkradicalssupplement,cjkstrokes,cjksymbolsandpunctuation,cjkunifiedideographs,cjkunifiedideographsextensiona,cjkunifiedideographsextensionb,halfwidthandfullwidthforms,verticalforms,hangulcompatibilityjamo,hanguljamo,hanguljamoextendeda,hanguljamoextendedb,hangulsyllables}] +%definefontfamilypreset [range:cyrillic] [\c!range={cyrillic,cyrillicextendeda,cyrillicextendedb,cyrillicsupplement}] +%definefontfamilypreset [range:greek] [\c!range={greekandcoptic,greekextended,ancientgreeknumbers}] + +\definefontfamilypreset [range:chinese] [\c!range={0x02E80-0x02EFF,0x03000-0x031EF,0x03300-0x09FFF,0x0F900-0x0FFEF,0x20000-0x2A6DF,0x2F800-0x2FA1F,0x03100-0x0312F,0x031A0-0x031BF}] +\definefontfamilypreset [range:japanese] [\c!range={0x02E80-0x02EFF,0x03000-0x031EF,0x03300-0x09FFF,0x0F900-0x0FFEF,0x20000-0x2A6DF,0x2F800-0x2FA1F,0x03040-0x0309F,0x030A0-0x030FF}] +\definefontfamilypreset [range:korean] [\c!range={0x02E80-0x02EFF,0x03000-0x031EF,0x03300-0x09FFF,0x0F900-0x0FFEF,0x20000-0x2A6DF,0x2F800-0x2FA1F,0x01100-0x011FF,0x03130-0x0318F,0x0A960-0x0D7FF}] +\definefontfamilypreset [range:cyrillic] [\c!range={0x00400-0x0052F,0x02DE0-0x02DFF,0x0A640-0x0A69F}] +\definefontfamilypreset [range:greek] [\c!range={0x00370-0x003FF,0x01F00-0x01FFF,0x10140-0x1018F}] +\definefontfamilypreset [range:hebrew] [\c!range={0x00590-0x005FF,0x0FB00-0x0FB4F}] \definefontfamilypreset [math:digitsnormal] [\c!range=digitsnormal] \definefontfamilypreset [math:digitsbold] [\c!range=digitsnormal,\c!offset=digitsbold,\s!tf=style:bold] @@ -219,13 +226,13 @@ %D \stoptyping %D %D When a document contains different languages and the global font lacks some characters -%D for one language, one could set a different font where these charcters are taken from. +%D for one language, one could set a different font where these characters are taken from. 
%D This fallback font (there can be more than one for a certain style) could be set with %D the \tex{definefallbackfamily} command which takes the same argument as %D the \tex{definefontfamily} command. %D %D \starttyping -%D \definefallbackfamily [mainface] [serif] [DejaVu Serif] [range=cyrillic,force=yes] +%D \definefallbackfamily [mainface] [serif] [DejaVu Serif] [range=cyrillic] %D \definefontfamily [mainface] [serif] [TeX Gyre Pagella] %D %D \setupbodyfont[mainface] @@ -364,4 +371,4 @@ \c!smallcapsfeatures=\s!smallcaps, \c!style=\s!rm] -\protect +\protect
\ No newline at end of file diff --git a/tex/context/base/font-set.mkvi b/tex/context/base/font-set.mkvi index 0e2058c18..f94d6c86e 100644 --- a/tex/context/base/font-set.mkvi +++ b/tex/context/base/font-set.mkvi @@ -39,27 +39,36 @@ % \enablemode[lmmath] +\let\m_font_fallback_name\empty + \def\font_preloads_reset_nullfont % this is needed because some macro packages (tikz) misuse \nullfont {\dorecurse\plusseven{\fontdimen\recurselevel\nullfont\zeropoint}% keep en eye on this as: \ctxcommand{resetnullfont()}% in luatex 0.70 this will also do the previous \globallet\font_preloads_reset_nullfont\relax} +\def\font_preload_check_mode + {\doifmodeelse{lmmath} + {\def\m_font_fallback_name{modern-designsize-virtual}}% this will stay + {\def\m_font_fallback_name{modern-designsize}}% % this might become 'modern' + \glet\font_preload_check_mode\relax} + \def\font_preload_default_fonts {\font_preloads_reset - \doifmodeelse{lmmath} - {\setupbodyfont[modern-designsize-virtual,\fontstyle,\fontbody]}% this will stay - {\setupbodyfont[modern-designsize,\fontstyle,\fontbody]}% % this might become 'modern' - \showmessage\m!fonts6{fallback modern \fontstyle\normalspace\normalizedbodyfontsize}} + \font_preload_check_mode + \setupbodyfont[\m_font_fallback_name,\fontstyle,\fontbody]% + \showmessage\m!fonts6{fallback \m_font_fallback_name\space \fontstyle\normalspace\normalizedbodyfontsize}} \def\font_preload_default_fonts_mm - {\writestatus\m!fonts{preloading latin modern fonts (math)}% - \definetypeface[\fontclass][\s!mm][\s!math][modern][\s!default]% - \showmessage\m!fonts6{fallback modern mm \normalizedbodyfontsize}} + {\font_preload_check_mode + \writestatus\m!fonts{preloading \m_font_fallback_name\space (math)}% + \definetypeface[\fontclass][\s!mm][\s!math][\m_font_fallback_name][\s!default]% + \showmessage\m!fonts6{fallback \m_font_fallback_name\space mm \normalizedbodyfontsize}} \def\font_preload_default_fonts_tt - {\writestatus\m!fonts{preloading latin modern fonts (mono)}% - 
\definetypeface[\fontclass][\s!tt][\s!mono][modern][\s!default]% - \showmessage\m!fonts6{fallback modern tt \normalizedbodyfontsize}} + {\font_preload_check_mode + \writestatus\m!fonts{preloading \m_font_fallback_name\space (mono)}% + \definetypeface[\fontclass][\s!tt][\s!mono][\m_font_fallback_name][\s!default]% + \showmessage\m!fonts6{fallback \m_font_fallback_name\space tt \normalizedbodyfontsize}} \def\font_preloads_reset {\glet\font_preload_default_fonts \relax diff --git a/tex/context/base/font-syn.lua b/tex/context/base/font-syn.lua index 6296f088e..18ed46a2f 100644 --- a/tex/context/base/font-syn.lua +++ b/tex/context/base/font-syn.lua @@ -396,7 +396,7 @@ function filters.afm(name) if key and #key > 0 then hash[lower(key)] = value end - if find(line,"StartCharMetrics") then + if find(line,"StartCharMetrics",1,true) then break end end @@ -1801,7 +1801,7 @@ local lastlookups, lastpattern = { }, "" -- local lookups = specifications -- if name then -- lookups = families[name] --- elseif not find(pattern,"=") then +-- elseif not find(pattern,"=",1,true) then -- lookups = families[pattern] -- end -- if trace_names then @@ -1810,7 +1810,7 @@ local lastlookups, lastpattern = { }, "" -- if lookups then -- for key, value in gmatch(pattern,"([^=,]+)=([^=,]+)") do -- local t, n = { }, 0 --- if find(value,"*") then +-- if find(value,"*",1,true) then -- value = topattern(value) -- for i=1,#lookups do -- local s = lookups[i] @@ -1843,7 +1843,7 @@ local lastlookups, lastpattern = { }, "" local function look_them_up(lookups,specification) for key, value in next, specification do local t, n = { }, 0 - if find(value,"*") then + if find(value,"*",1,true) then value = topattern(value) for i=1,#lookups do local s = lookups[i] @@ -1906,7 +1906,7 @@ function names.lookup(pattern,name,reload) -- todo: find lastpattern = false lastlookups = lookups or { } elseif lastpattern ~= pattern then - local lookups = first_look(name or (not find(pattern,"=") and pattern),reload) + local 
lookups = first_look(name or (not find(pattern,"=",1,true) and pattern),reload) if lookups then if trace_names then report_names("starting with %s lookups for %a",#lookups,pattern) diff --git a/tex/context/base/font-var.mkvi b/tex/context/base/font-var.mkvi index e50c2bad4..fb60b711c 100644 --- a/tex/context/base/font-var.mkvi +++ b/tex/context/base/font-var.mkvi @@ -50,4 +50,7 @@ \let\fontsize \defaultfontsize \let\fontface \!!zerocount +% we can use an indirect mapping for fontclasses (map string onto numbers) and indeed this +% is somewhat more efficient but also makes the code messy ... maybe some day ... + \protect \endinput diff --git a/tex/context/base/l-dir.lua b/tex/context/base/l-dir.lua index b658b7c75..257212060 100644 --- a/tex/context/base/l-dir.lua +++ b/tex/context/base/l-dir.lua @@ -27,7 +27,7 @@ local currentdir = lfs.currentdir local chdir = lfs.chdir local mkdir = lfs.mkdir -local onwindows = os.type == "windows" or find(os.getenv("PATH"),";") +local onwindows = os.type == "windows" or find(os.getenv("PATH"),";",1,true) -- in case we load outside luatex @@ -189,7 +189,7 @@ local function glob(str,t) local split = lpegmatch(pattern,str) -- we could use the file splitter if split then local root, path, base = split[1], split[2], split[3] - local recurse = find(base,"%*%*") + local recurse = find(base,"**",1,true) -- find(base,"%*%*") local start = root .. path local result = lpegmatch(filter,start .. base) globpattern(start,result,recurse,t) @@ -215,7 +215,7 @@ local function glob(str,t) local t = t or { } local action = action or function(name) t[#t+1] = name end local root, path, base = split[1], split[2], split[3] - local recurse = find(base,"%*%*") + local recurse = find(base,"**",1,true) -- find(base,"%*%*") local start = root .. path local result = lpegmatch(filter,start .. base) globpattern(start,result,recurse,action) @@ -296,7 +296,6 @@ if onwindows then str = "" for i=1,n do local s = select(i,...) - local s = select(i,...) 
if s == "" then -- skip elseif str == "" then diff --git a/tex/context/base/l-io.lua b/tex/context/base/l-io.lua index 52f166af9..020e811bf 100644 --- a/tex/context/base/l-io.lua +++ b/tex/context/base/l-io.lua @@ -12,7 +12,7 @@ local concat = table.concat local floor = math.floor local type = type -if string.find(os.getenv("PATH"),";") then +if string.find(os.getenv("PATH"),";",1,true) then io.fileseparator, io.pathseparator = "\\", ";" else io.fileseparator, io.pathseparator = "/" , ":" diff --git a/tex/context/base/l-lua.lua b/tex/context/base/l-lua.lua index 4a96b0b1d..9565f484a 100644 --- a/tex/context/base/l-lua.lua +++ b/tex/context/base/l-lua.lua @@ -6,6 +6,17 @@ if not modules then modules = { } end modules ['l-lua'] = { license = "see context related readme files" } +-- potential issues with 5.3: + +-- i'm not sure yet if the int/float change is good for luatex + +-- math.min +-- math.max +-- tostring +-- tonumber +-- utf.* +-- bit32 + -- compatibility hacksand helpers local major, minor = string.match(_VERSION,"^[^%d]+(%d+)%.(%d+).*$") diff --git a/tex/context/base/l-os.lua b/tex/context/base/l-os.lua index bfafa4f95..1dff79cd3 100644 --- a/tex/context/base/l-os.lua +++ b/tex/context/base/l-os.lua @@ -137,7 +137,7 @@ function os.resultof(command) end if not io.fileseparator then - if find(os.getenv("PATH"),";") then + if find(os.getenv("PATH"),";",1,true) then io.fileseparator, io.pathseparator, os.type = "\\", ";", os.type or "mswin" else io.fileseparator, io.pathseparator, os.type = "/" , ":", os.type or "unix" @@ -236,7 +236,7 @@ elseif os.type == "windows" then function resolvers.platform(t,k) local platform, architecture = "", os.getenv("PROCESSOR_ARCHITECTURE") or "" - if find(architecture,"AMD64") then + if find(architecture,"AMD64",1,true) then -- platform = "mswin-64" platform = "win64" else @@ -252,9 +252,9 @@ elseif name == "linux" then function resolvers.platform(t,k) -- we sometimes have HOSTTYPE set so let's check that first local platform, 
architecture = "", os.getenv("HOSTTYPE") or os.resultof("uname -m") or "" - if find(architecture,"x86_64") then + if find(architecture,"x86_64",1,true) then platform = "linux-64" - elseif find(architecture,"ppc") then + elseif find(architecture,"ppc",1,true) then platform = "linux-ppc" else platform = "linux" @@ -285,9 +285,9 @@ elseif name == "macosx" then if architecture == "" then -- print("\nI have no clue what kind of OSX you're running so let's assume an 32 bit intel.\n") platform = "osx-intel" - elseif find(architecture,"i386") then + elseif find(architecture,"i386",1,true) then platform = "osx-intel" - elseif find(architecture,"x86_64") then + elseif find(architecture,"x86_64",1,true) then platform = "osx-64" else platform = "osx-ppc" @@ -301,7 +301,7 @@ elseif name == "sunos" then function resolvers.platform(t,k) local platform, architecture = "", os.resultof("uname -m") or "" - if find(architecture,"sparc") then + if find(architecture,"sparc",1,true) then platform = "solaris-sparc" else -- if architecture == 'i86pc' platform = "solaris-intel" @@ -315,7 +315,7 @@ elseif name == "freebsd" then function resolvers.platform(t,k) local platform, architecture = "", os.resultof("uname -m") or "" - if find(architecture,"amd64") then + if find(architecture,"amd64",1,true) then platform = "freebsd-amd64" else platform = "freebsd" @@ -330,7 +330,7 @@ elseif name == "kfreebsd" then function resolvers.platform(t,k) -- we sometimes have HOSTTYPE set so let's check that first local platform, architecture = "", os.getenv("HOSTTYPE") or os.resultof("uname -m") or "" - if find(architecture,"x86_64") then + if find(architecture,"x86_64",1,true) then platform = "kfreebsd-amd64" else platform = "kfreebsd-i386" @@ -356,7 +356,7 @@ else end function resolvers.bits(t,k) - local bits = find(os.platform,"64") and 64 or 32 + local bits = find(os.platform,"64",1,true) and 64 or 32 os.bits = bits return bits end diff --git a/tex/context/base/l-table.lua b/tex/context/base/l-table.lua 
index c318c57bb..d231830ed 100644 --- a/tex/context/base/l-table.lua +++ b/tex/context/base/l-table.lua @@ -88,6 +88,38 @@ local function sortedkeys(tab) end end +local function sortedhashonly(tab) + if tab then + local srt, s = { }, 0 + for key,_ in next, tab do + if type(key) == "string" then + s = s + 1 + srt[s] = key + end + end + sort(srt) + return srt + else + return { } + end +end + +local function sortedindexonly(tab) + if tab then + local srt, s = { }, 0 + for key,_ in next, tab do + if type(key) == "number" then + s = s + 1 + srt[s] = key + end + end + sort(srt) + return srt + else + return { } + end +end + local function sortedhashkeys(tab,cmp) -- fast one if tab then local srt, s = { }, 0 @@ -114,8 +146,10 @@ function table.allkeys(t) return sortedkeys(keys) end -table.sortedkeys = sortedkeys -table.sortedhashkeys = sortedhashkeys +table.sortedkeys = sortedkeys +table.sortedhashonly = sortedhashonly +table.sortedindexonly = sortedindexonly +table.sortedhashkeys = sortedhashkeys local function nothing() end diff --git a/tex/context/base/l-unicode.lua b/tex/context/base/l-unicode.lua index 6601a4c62..be61f3d73 100644 --- a/tex/context/base/l-unicode.lua +++ b/tex/context/base/l-unicode.lua @@ -6,7 +6,14 @@ if not modules then modules = { } end modules ['l-unicode'] = { license = "see context related readme files" } --- this module will be reorganized +-- in lua 5.3: + +-- utf8.char(···) : concatenated +-- utf8.charpatt : "[\0-\x7F\xC2-\xF4][\x80-\xBF]*" +-- utf8.codes(s) : for p, c in utf8.codes(s) do body end +-- utf8.codepoint(s [, i [, j]]) +-- utf8.len(s [, i]) +-- utf8.offset(s, n [, i]) -- todo: utf.sub replacement (used in syst-aux) -- we put these in the utf namespace: diff --git a/tex/context/base/lang-def.mkiv b/tex/context/base/lang-def.mkiv index 18f572039..5c1d6de9c 100644 --- a/tex/context/base/lang-def.mkiv +++ b/tex/context/base/lang-def.mkiv @@ -634,6 +634,10 @@ \c!rightquotation=\upperrightdoubleninequote,
\c!date={\v!year,\space,\v!month,\space,\v!day}] +\installlanguage[\s!pt-br][\c!default=\s!pt] % Brazil +\installlanguage[\s!es-es][\c!default=\s!es] % Spain +\installlanguage[\s!es-la][\c!default=\s!es] % Latin America + \installlanguage [\s!ro] [\c!spacing=\v!packed, diff --git a/tex/context/base/lang-ini.mkiv b/tex/context/base/lang-ini.mkiv index 17d00033b..4ed7839bd 100644 --- a/tex/context/base/lang-ini.mkiv +++ b/tex/context/base/lang-ini.mkiv @@ -479,7 +479,7 @@ \lang_basics_switch_asked} \unexpanded\def\language - {\doifnextoptionalelse\lang_basics_set_current\normallanguage} + {\doifnextoptionalcselse\lang_basics_set_current\normallanguage} \newcount\mainlanguagenumber @@ -505,7 +505,7 @@ \normallanguage\mainlanguagenumber \to \everybeforepagebody -%D New (see nomarking and nolist): +%D Used at all? \def\splitsequence#1#2% {\doifelse{#1}\v!no{#2}{\doifelse{#1}\v!yes{\languageparameter\c!limittext}{#1}}} diff --git a/tex/context/base/lang-lab.mkiv b/tex/context/base/lang-lab.mkiv index 14d9d8594..7dcaaecb4 100644 --- a/tex/context/base/lang-lab.mkiv +++ b/tex/context/base/lang-lab.mkiv @@ -180,7 +180,7 @@ \grabuntil{stop#1text}\lang_labels_text_prefix_start_indeed} \def\lang_labels_text_prefix_start_indeed#1% text (not special checking done here yet, only for long texts anyway) - {\expandafter\edef\csname\??label\currenttextprefixclass:\currenttextprefixtag:\currenttextprefixname\endcsname{{\ctxlua{context(string.strip(\!!bs#1\!!es))}}\empty}} + {\expandafter\edef\csname\??label\currenttextprefixclass:\currenttextprefixtag:\currenttextprefixname\endcsname{{\ctxcommand{strip(\!!bs#1\!!es)}}\empty}} \def\lang_labels_text_prefix_setup[#1][#2]% {\ifsecondargument diff --git a/tex/context/base/lang-rep.lua b/tex/context/base/lang-rep.lua index be74d597a..02eb59f48 100644 --- a/tex/context/base/lang-rep.lua +++ b/tex/context/base/lang-rep.lua @@ -90,7 +90,7 @@ local function add(root,word,replacement) -- for i=1,#newlist do -- newlist[i] = utfbyte(newlist[i]) -- 
end - local special = find(replacement,"{") + local special = find(replacement,"{",1,true) local newlist = lpegmatch(splitter,replacement) -- root[l].final = { diff --git a/tex/context/base/lpdf-ano.lua b/tex/context/base/lpdf-ano.lua index 3f0e718b3..827c43ec6 100644 --- a/tex/context/base/lpdf-ano.lua +++ b/tex/context/base/lpdf-ano.lua @@ -10,46 +10,66 @@ if not modules then modules = { } end modules ['lpdf-ano'] = { -- todo: /AA << WC << ... >> >> : WillClose actions etc -local next, tostring = next, tostring -local rep, format = string.rep, string.format +-- internal references are indicated by a number (and turned into <autoprefix><number>) +-- we only flush internal destinations that are referred + +local next, tostring, tonumber, rawget = next, tostring, tonumber, rawget +local rep, format, find = string.rep, string.format, string.find +local min = math.min local lpegmatch = lpeg.match local formatters = string.formatters local backends, lpdf = backends, lpdf -local trace_references = false trackers.register("references.references", function(v) trace_references = v end) -local trace_destinations = false trackers.register("references.destinations", function(v) trace_destinations = v end) -local trace_bookmarks = false trackers.register("references.bookmarks", function(v) trace_bookmarks = v end) +local trace_references = false trackers.register("references.references", function(v) trace_references = v end) +local trace_destinations = false trackers.register("references.destinations", function(v) trace_destinations = v end) +local trace_bookmarks = false trackers.register("references.bookmarks", function(v) trace_bookmarks = v end) + +local log_destinations = false directives.register("destinations.log", function(v) log_destinations = v end) -local report_reference = logs.reporter("backend","references") -local report_destination = logs.reporter("backend","destinations") -local report_bookmark = logs.reporter("backend","bookmarks") +local report_reference = 
logs.reporter("backend","references") +local report_destination = logs.reporter("backend","destinations") +local report_bookmark = logs.reporter("backend","bookmarks") local variables = interfaces.variables -local constants = interfaces.constants +local v_auto = variables.auto +local v_page = variables.page + +local factor = number.dimenfactors.bp local settings_to_array = utilities.parsers.settings_to_array +local allocate = utilities.storage.allocate +local setmetatableindex = table.setmetatableindex + local nodeinjections = backends.pdf.nodeinjections local codeinjections = backends.pdf.codeinjections local registrations = backends.pdf.registrations +local getpos = codeinjections.getpos +local gethpos = codeinjections.gethpos +local getvpos = codeinjections.getvpos + local javascriptcode = interactions.javascripts.code local references = structures.references local bookmarks = structures.bookmarks +local flaginternals = references.flaginternals +local usedinternals = references.usedinternals +local usedviews = references.usedviews + local runners = references.runners local specials = references.specials local handlers = references.handlers local executers = references.executers -local getinnermethod = references.getinnermethod local nodepool = nodes.pool -local pdfannotation_node = nodepool.pdfannotation -local pdfdestination_node = nodepool.pdfdestination -local latelua_node = nodepool.latelua +----- pdfannotation_node = nodepool.pdfannotation +----- pdfdestination_node = nodepool.pdfdestination +----- latelua_node = nodepool.latelua +local latelua_function_node = nodepool.lateluafunction -- still node ... 
todo local texgetcount = tex.getcount @@ -63,7 +83,12 @@ local pdfshareobjectreference = lpdf.shareobjectreference local pdfreserveobject = lpdf.reserveobject local pdfpagereference = lpdf.pagereference local pdfdelayedobject = lpdf.delayedobject -local pdfregisterannotation = lpdf.registerannotation +local pdfregisterannotation = lpdf.registerannotation -- forward definition (for the moment) +local pdfnull = lpdf.null +local pdfaddtocatalog = lpdf.addtocatalog +local pdfaddtonames = lpdf.addtonames +local pdfaddtopageattributes = lpdf.addtopageattributes +local pdfrectangle = lpdf.rectangle -- todo: 3dview @@ -79,102 +104,417 @@ local pdf_t = pdfconstant("T") local pdf_fit = pdfconstant("Fit") local pdf_named = pdfconstant("Named") -local pdf_border = pdfarray { 0, 0, 0 } +local autoprefix = "#" -local cache = { } +-- Bah, I hate this kind of features .. anyway, as we have delayed resolving we +-- only support a document-wide setup and it has to be set before the first one +-- is used. Also, we default to a non-intrusive gray and the outline is kept +-- thin without dashing lines. This is as far as I'm prepared to go. This way +-- it can also be used as a debug feature. 
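[Editor's note: the caching rework that follows replaces the explicit `cache[n]` bookkeeping of the old `pagedestination` with `setmetatableindex` tables that build their entries on demand. A minimal, self-contained sketch of that pattern in plain Lua (illustrative names and stand-in values only, no ConTeXt helpers):]

```lua
-- Lazy-table pattern: __index builds an entry on first access and stores
-- it in the table, so every later lookup is a plain (raw) table hit.
local function setmetatableindex(t, f)
  return setmetatable(t, { __index = f })
end

local pagedestinations = setmetatableindex({ }, function(t, k)
  k = tonumber(k)
  local v = { page = k, view = "Fit" } -- stand-in for the real pdfarray
  t[k] = v                             -- cache for subsequent lookups
  return v
end)

local first  = pagedestinations[3]    -- built by __index
local second = pagedestinations[3]    -- served from the table
assert(first == second)
```

Storing via `t[k] = v` inside `__index` is safe here: assignment only consults `__newindex`, which is unset, so it behaves like `rawset` and the metamethod fires at most once per key.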
-local function pagedestination(n) -- only cache fit - if n > 0 then - local pd = cache[n] - if not pd then - local a = pdfarray { - pdfreference(pdfpagereference(n)), - pdf_fit, - } - pd = pdfshareobjectreference(a) - cache[n] = pd +local pdf_border_style = pdfarray { 0, 0, 0 } -- radius radius linewidth +local pdf_border_color = nil +local set_border = false + +function pdfborder() + set_border = true + return pdf_border_style, pdf_border_color +end + +lpdf.border = pdfborder + +directives.register("references.border",function(v) + if v and not set_border then + if type(v) == "string" then + local m = attributes.list[attributes.private('color')] or { } + local c = m and m[v] + local v = c and attributes.colors.value(c) + if v then + local r, g, b = v[3], v[4], v[5] + -- if r == g and g == b then + -- pdf_border_color = pdfarray { r } -- reduced, not not ... bugged viewers + -- else + pdf_border_color = pdfarray { r, g, b } -- always rgb + -- end + end end - return pd + if not pdf_border_color then + pdf_border_color = pdfarray { .6, .6, .6 } -- no reduce to { 0.6 } as there are buggy viewers out there + end + pdf_border_style = pdfarray { 0, 0, .5 } -- < 0.5 is not shown by acrobat (at least not in my version) end -end +end) + +-- the used and flag code here is somewhat messy in the sense +-- that it belongs in strc-ref but at the same time depends on +-- the backend so we keep it here + +-- the caching is somewhat memory intense on the one hand but +-- it saves many small temporary tables so it might pay off + +local pagedestinations = allocate() +local pagereferences = allocate() -- annots are cached themselves -lpdf.pagedestination = pagedestination +setmetatableindex(pagedestinations, function(t,k) + k = tonumber(k) + local v = rawget(t,k) + if v then + -- report_reference("page number expected, got %s: %a",type(k),k) + return v + end + local v = k > 0 and pdfarray { + pdfreference(pdfpagereference(k)), + pdf_fit, + } or pdfnull() + t[k] = v + return v +end) + 
+setmetatableindex(pagereferences,function(t,k) + k = tonumber(k) + local v = rawget(t,k) + if v then + return v + end + local v = pdfdictionary { -- can be cached + S = pdf_goto, + D = pagedestinations[k], + } + t[k] = v + return v +end) + +lpdf.pagereferences = pagereferences -- table +lpdf.pagedestinations = pagedestinations -- table local defaultdestination = pdfarray { 0, pdf_fit } -local function link(url,filename,destination,page,actions) - if filename and filename ~= "" then - if file.basename(filename) == tex.jobname then - return false - else - filename = file.addsuffix(filename,"pdf") +-- fit is default (see lpdf-nod) + +local destinations = { } -- to be used soon + +local function pdfregisterdestination(name,reference) + local d = destinations[name] + if d then + report_destination("ignoring duplicate destination %a with reference %a",name,reference) + else + destinations[name] = reference + end +end + +lpdf.registerdestination = pdfregisterdestination + +local maxslice = 32 -- could be made configurable ... 
64 is also ok + +luatex.registerstopactions(function() + if log_destinations and next(destinations) then + local logsnewline = logs.newline + local log_destinations = logs.reporter("system","references") + local log_destination = logs.reporter("destination") + logs.pushtarget("logfile") + logsnewline() + log_destinations("start used destinations") + logsnewline() + local n = 0 + for destination, pagenumber in table.sortedhash(destinations) do + log_destination("% 4i : %-5s : %s",pagenumber,usedviews[destination] or defaultview,destination) + n = n + 1 + end + logsnewline() + log_destinations("stop used destinations") + logsnewline() + logs.poptarget() + report_destination("%s destinations saved in log file",n) + end +end) + + +local function pdfnametree(destinations) + local slices = { } + local sorted = table.sortedkeys(destinations) + local size = #sorted + + if size <= 1.5*maxslice then + maxslice = size + end + + for i=1,size,maxslice do + local amount = min(i+maxslice-1,size) + local names = pdfarray { } + for j=i,amount do + local destination = sorted[j] + local pagenumber = destinations[destination] + names[#names+1] = destination + names[#names+1] = pdfreference(pagenumber) + end + local first = sorted[i] + local last = sorted[amount] + local limits = pdfarray { + first, + last, + } + local d = pdfdictionary { + Names = names, + Limits = limits, + } + slices[#slices+1] = { + reference = pdfreference(pdfflushobject(d)), + limits = limits, + } + end + local function collectkids(slices,first,last) + local k = pdfarray() + local d = pdfdictionary { + Kids = k, + Limits = pdfarray { + slices[first].limits[1], + slices[last ].limits[2], + }, + } + for i=first,last do + k[#k+1] = slices[i].reference end + return d end - if url and url ~= "" then - if filename and filename ~= "" then - if destination and destination ~= "" then - url = file.join(url,filename).."#"..destination + if #slices == 1 then + return slices[1].reference + else + while true do + if #slices > 
maxslice then + local temp = { } + local size = #slices + for i=1,size,maxslice do + local kids = collectkids(slices,i,min(i+maxslice-1,size)) + temp[#temp+1] = { + reference = pdfreference(pdfflushobject(kids)), + limits = kids.Limits, + } + end + slices = temp else - url = file.join(url,filename) + return pdfreference(pdfflushobject(collectkids(slices,1,#slices))) end end - return pdfdictionary { - S = pdf_uri, - URI = url, - } - elseif filename and filename ~= "" then - -- no page ? - if destination == "" then + end +end + +local function pdfdestinationspecification() + if next(destinations) then -- safeguard + local r = pdfnametree(destinations) + -- pdfaddtocatalog("Dests",r) + pdfaddtonames("Dests",r) + if not log_destinations then + destinations = nil + end + end +end + +lpdf.nametree = pdfnametree +lpdf.destinationspecification = pdfdestinationspecification + +lpdf.registerdocumentfinalizer(pdfdestinationspecification,"collect destinations") + +-- todo + +local destinations = { } + +local f_xyz = formatters["<< /D [ %i 0 R /XYZ %0.3F %0.3F null ] >>"] +local f_fit = formatters["<< /D [ %i 0 R /Fit ] >>"] +local f_fitb = formatters["<< /D [ %i 0 R /FitB ] >>"] +local f_fith = formatters["<< /D [ %i 0 R /FitH %0.3F ] >>"] +local f_fitv = formatters["<< /D [ %i 0 R /FitV %0.3F ] >>"] +local f_fitbh = formatters["<< /D [ %i 0 R /FitBH %0.3F ] >>"] +local f_fitbv = formatters["<< /D [ %i 0 R /FitBV %0.3F ] >>"] +local f_fitr = formatters["<< /D [ %i 0 R /FitR [ %0.3F %0.3F %0.3F %0.3F ] ] >>"] + +local v_standard = variables.standard +local v_frame = variables.frame +local v_width = variables.width +local v_minwidth = variables.minwidth +local v_height = variables.height +local v_minheight = variables.minheight +local v_fit = variables.fit +local v_tight = variables.tight + +-- nicer is to create dictionaries and set properties but it's a bit overkill + +local destinationactions = { + [v_standard] = function(r,w,h,d) return f_xyz (r,pdfrectangle(w,h,d)) end, -- 
local left,top with zoom (0 in our case) + [v_frame] = function(r,w,h,d) return f_fitr (r,pdfrectangle(w,h,d)) end, -- fit rectangle in window + [v_width] = function(r,w,h,d) return f_fith (r, gethpos() *factor) end, -- top coordinate, fit width of page in window + [v_minwidth] = function(r,w,h,d) return f_fitbh(r, gethpos() *factor) end, -- top coordinate, fit width of content in window + [v_height] = function(r,w,h,d) return f_fitv (r,(getvpos()+h)*factor) end, -- left coordinate, fit height of page in window + [v_minheight] = function(r,w,h,d) return f_fitbv(r,(getvpos()+h)*factor) end, -- left coordinate, fit height of content in window + [v_fit] = f_fit, -- fit page in window + [v_tight] = f_fitb, -- fit content in window +} + +local mapping = { + [v_standard] = v_standard, xyz = v_standard, + [v_frame] = v_frame, fitr = v_frame, + [v_width] = v_width, fith = v_width, + [v_minwidth] = v_minwidth, fitbh = v_minwidth, + [v_height] = v_height, fitv = v_height, + [v_minheight] = v_minheight, fitbv = v_minheight, + [v_fit] = v_fit, fit = v_fit, + [v_tight] = v_tight, fitb = v_tight, +} + +local defaultview = v_fit +local defaultaction = destinationactions[defaultview] + +-- A complication is that we need to use named destinations when we have views so we +-- end up with a mix. A previous version just output multiple destinations but now +-- that we moved all to here we can be more sparse.
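[Editor's note: the `f_*` templates above interpolate a PDF object number and bp coordinates into destination dictionary strings. A hedged sketch of that step with plain `string.format` (the object number and coordinate are made up; ConTeXt itself uses its `string.formatters` cache and the `%0.3F` variant rather than stock `string.format`):]

```lua
-- Build a /FitH destination string the way the f_fith template does:
-- %d takes the (hypothetical) page object number, %0.3f the top
-- coordinate in big points, rounded to three decimals.
local f_fith = "<< /D [ %d 0 R /FitH %0.3f ] >>"
local destination = string.format(f_fith, 12, 543.21)
assert(destination == "<< /D [ 12 0 R /FitH 543.210 ] >>")
```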
+ +local pagedestinations = { } + +table.setmetatableindex(pagedestinations,function(t,k) + local v = pdfdelayedobject(f_fit(k)) + t[k] = v + return v +end) + +local function flushdestination(width,height,depth,names,view) + local r = pdfpagereference(texgetcount("realpageno")) + if view == defaultview then + r = pagedestinations[r] + else + local action = view and destinationactions[view] or defaultaction + r = pdfdelayedobject(action(r,width,height,depth)) + end + for n=1,#names do + local name = names[n] + if name then + pdfregisterdestination(name,r) + end + end +end + +function nodeinjections.destination(width,height,depth,names,view) + -- todo check if begin end node / was comment + view = view and mapping[view] or defaultview + if trace_destinations then + report_destination("width %p, height %p, depth %p, names %|t, view %a",width,height,depth,names,view) + end + local method = references.innermethod + local noview = view == defaultview + local doview = false + -- we could save some aut's by using a name when given but it doesn't pay off apart + -- from making the code messy and tracing hard .. we only save some destinations + -- which we already share anyway + for n=1,#names do + local name = names[n] + if usedviews[name] then + -- already done, maybe a warning + elseif type(name) == "number" then + if noview then + usedviews[name] = view + names[n] = false + elseif method == v_page then + usedviews[name] = view + names[n] = false + else + local used = usedinternals[name] + if used and used ~= defaultview then + usedviews[name] = view + names[n] = autoprefix .. name + doview = true + end + end + elseif method == v_page then + usedviews[name] = view + else + usedviews[name] = view + doview = true + end + end + if doview then + return latelua_function_node(function() flushdestination(width,height,depth,names,view) end) + end +end + +-- we could share dictionaries ... 
todo + +local function somedestination(destination,internal,page) -- no view anyway + if references.innermethod ~= v_page then + if type(destination) == "number" then + if not internal then + internal = destination + end destination = nil end - if not destination and page then - destination = pdfarray { page - 1, pdf_fit } + if internal then + flaginternals[internal] = true -- for bookmarks and so + local used = usedinternals[internal] + if used == defaultview or used == true then + return pagereferences[page] + end + if type(destination) ~= "string" then + destination = autoprefix .. internal + end + return pdfdictionary { + S = pdf_goto, + D = destination, + } end - return pdfdictionary { - S = pdf_gotor, -- can also be pdf_launch - F = filename, - D = destination or defaultdestination, -- D is mandate - NewWindow = (actions.newwindow and true) or nil, - } - elseif destination and destination ~= "" then - return pdfdictionary { -- can be cached - S = pdf_goto, - D = destination, - } - else - local p = tonumber(page) - if p and p > 0 then - return pdfdictionary { -- can be cached + if destination then + -- hopefully this one is flushed + return pdfdictionary { S = pdf_goto, - D = pdfarray { - pdfreference(pdfpagereference(p)), - pdf_fit, - } + D = destination, } - elseif trace_references then - report_reference("invalid page reference %a",page) end end - return false + return pagereferences[page] end -lpdf.link = link +-- annotations -function lpdf.launch(program,parameters) - if program and program ~= "" then - local d = pdfdictionary { - S = pdf_launch, - F = program, - D = ".", - } - if parameters and parameters ~= "" then - d.P = parameters - end - return d +local pdflink = somedestination + +local function pdffilelink(filename,destination,page,actions) + if not filename or filename == "" or file.basename(filename) == tex.jobname then + return false + end + filename = file.addsuffix(filename,"pdf") + if not destination or destination == "" then + destination = 
pdfarray { (page or 0) - 1, pdf_fit } end + return pdfdictionary { + S = pdf_gotor, -- can also be pdf_launch + F = filename, + D = destination or defaultdestination, -- D is mandate + NewWindow = actions.newwindow and true or nil, + } end -function lpdf.javascript(name,arguments) +local function pdfurllink(url,destination,page) + if not url or url == "" then + return false + end + if destination and destination ~= "" then + url = url .. "#" .. destination + end + return pdfdictionary { + S = pdf_uri, + URI = url, + } +end + +local function pdflaunch(program,parameters) + if not program or program == "" then + return false + end + return pdfdictionary { + S = pdf_launch, + F = program, + D = ".", + P = parameters ~= "" and parameters or nil + } +end + +local function pdfjavascript(name,arguments) local script = javascriptcode(name,arguments) -- make into object (hash) if script then return pdfdictionary { @@ -219,9 +559,11 @@ function codeinjections.prerollreference(actions) -- share can become option if actions then local main, n = pdfaction(actions) if main then - main = pdfdictionary { + local bs, bc = pdfborder() + main = pdfdictionary { Subtype = pdf_link, - Border = pdf_border, + Border = bs, + C = bc, H = (not actions.highlight and pdf_n) or nil, A = pdfshareobjectreference(main), F = 4, -- print (mandate in pdf/a) @@ -231,157 +573,146 @@ function codeinjections.prerollreference(actions) -- share can become option end end -local function use_normal_annotations() - - local function reference(width,height,depth,prerolled) -- keep this one - if prerolled then - if trace_references then - report_reference("width %p, height %p, depth %p, prerolled %a",width,height,depth,prerolled) - end - return pdfannotation_node(width,height,depth,prerolled) - end - end - - local function finishreference() - end - - return reference, finishreference - -end +-- local function use_normal_annotations() +-- +-- local function reference(width,height,depth,prerolled) -- keep this one 
+-- if prerolled then +-- if trace_references then +-- report_reference("width %p, height %p, depth %p, prerolled %a",width,height,depth,prerolled) +-- end +-- return pdfannotation_node(width,height,depth,prerolled) +-- end +-- end +-- +-- local function finishreference() +-- end +-- +-- return reference, finishreference +-- +-- end -- eventually we can do this for special refs only -local hashed, nofunique, nofused = { }, 0, 0 +local hashed = { } +local nofunique = 0 +local nofused = 0 +local nofspecial = 0 +local share = true -local f_annot = formatters["<< /Type /Annot %s /Rect [%0.3f %0.3f %0.3f %0.3f] >>"] -local f_bpnf = formatters["_bpnf_(%s,%s,%s,'%s')"] +local f_annot = formatters["<< /Type /Annot %s /Rect [ %0.3F %0.3F %0.3F %0.3F ] >>"] -local function use_shared_annotations() +directives.register("refences.sharelinks", function(v) share = v end) - local factor = number.dimenfactors.bp - - local function finishreference(width,height,depth,prerolled) -- %0.2f looks okay enough (no scaling anyway) - local h, v = pdf.h, pdf.v - local llx, lly = h*factor, (v - depth)*factor - local urx, ury = (h + width)*factor, (v + height)*factor - local annot = f_annot(prerolled,llx,lly,urx,ury) - local n = hashed[annot] - if not n then - n = pdfdelayedobject(annot) - hashed[annot] = n - nofunique = nofunique + 1 - end - nofused = nofused + 1 - pdfregisterannotation(n) +table.setmetatableindex(hashed,function(t,k) + local v = pdfdelayedobject(k) + if share then + t[k] = v end + nofunique = nofunique + 1 + return v +end) + +local function finishreference(width,height,depth,prerolled) -- %0.2f looks okay enough (no scaling anyway) + local annot = hashed[f_annot(prerolled,pdfrectangle(width,height,depth))] + nofused = nofused + 1 + return pdfregisterannotation(annot) +end - _bpnf_ = finishreference - - local function reference(width,height,depth,prerolled) - if prerolled then - if trace_references then - report_reference("width %p, height %p, depth %p, prerolled 
%a",width,height,depth,prerolled) - end - local luacode = f_bpnf(width,height,depth,prerolled) - return latelua_node(luacode) - end +local function finishannotation(width,height,depth,prerolled,r) + local annot = f_annot(prerolled,pdfrectangle(width,height,depth)) + if r then + pdfdelayedobject(annot,r) + else + r = pdfdelayedobject(annot) end + nofspecial = nofspecial + 1 + return pdfregisterannotation(r) +end - statistics.register("pdf annotations", function() - if nofused > 0 then - return format("%s embedded, %s unique",nofused,nofunique) - else - return nil +function nodeinjections.reference(width,height,depth,prerolled) + if prerolled then + if trace_references then + report_reference("link: width %p, height %p, depth %p, prerolled %a",width,height,depth,prerolled) end - end) - - - return reference, finishreference - + return latelua_function_node(function() finishreference(width,height,depth,prerolled) end) + end end -local lln = latelua_node() if node.has_field(lln,'string') then - - directives.register("refences.sharelinks", function(v) - if v then - nodeinjections.reference, codeinjections.finishreference = use_shared_annotations() - else - nodeinjections.reference, codeinjections.finishreference = use_normal_annotations() +function nodeinjections.annotation(width,height,depth,prerolled,r) + if prerolled then + if trace_references then + report_reference("special: width %p, height %p, depth %p, prerolled %a",width,height,depth,prerolled) end - end) + return latelua_function_node(function() finishannotation(width,height,depth,prerolled,r or false) end) + end +end - nodeinjections.reference, codeinjections.finishreference = use_shared_annotations() +-- beware, we register during a latelua sweep so we have to make sure that +-- we finalize after that (also in a latelua for the moment as we have no +-- callback yet) -else +local annotations = nil - nodeinjections.reference, codeinjections.finishreference = use_normal_annotations() +function 
lpdf.registerannotation(n) + if annotations then + annotations[#annotations+1] = pdfreference(n) + else + annotations = pdfarray { pdfreference(n) } -- no need to use lpdf.array cum suis + end +end -end node.free(lln) +pdfregisterannotation = lpdf.registerannotation --- -- -- -- --- -- -- -- +function lpdf.annotationspecification() + if annotations then + local r = pdfdelayedobject(tostring(annotations)) -- delayed so okay in latelua + pdfaddtopageattributes("Annots",pdfreference(r)) + annotations = nil + end +end -local done = { } -- prevent messages +lpdf.registerpagefinalizer(lpdf.annotationspecification,"finalize annotations") -function nodeinjections.destination(width,height,depth,name,view) - if not done[name] then - done[name] = true - if trace_destinations then - report_destination("width %p, height %p, depth %p, name %a, view %a",width,height,depth,name,view) - end - return pdfdestination_node(width,height,depth,name,view) -- can be begin/end node +statistics.register("pdf annotations", function() + if nofused > 0 or nofspecial > 0 then + return format("%s links (%s unique), %s special",nofused,nofunique,nofspecial) + else + return nil end -end +end) -- runners and specials --- runners["inner"] = function(var,actions) --- if getinnermethod() == "names" then --- local vi = var.i --- if vi then --- local vir = vi.references --- if vir then --- local internal = vir.internal --- if internal then --- var.inner = "aut:" .. internal --- end --- end --- end --- else --- var.inner = nil --- end --- local prefix = var.p --- local inner = var.inner --- if inner and prefix and prefix ~= "" then --- inner = prefix .. ":" .. 
inner -- might not always be ok --- end --- return link(nil,nil,inner,var.r,actions) --- end - runners["inner"] = function(var,actions) local internal = false - if getinnermethod() == "names" then + local inner = nil + if references.innermethod == v_auto then local vi = var.i if vi then local vir = vi.references if vir then -- todo: no need for it when we have a real reference + local reference = vir.reference + if reference and reference ~= "" then + var.inner = reference + local prefix = var.p + if prefix and prefix ~= "" then + var.prefix = prefix + inner = prefix .. ":" .. reference + else + inner = reference + end + end internal = vir.internal if internal then - var.inner = "aut:" .. internal + flaginternals[internal] = true end end end else var.inner = nil end - local prefix = var.p - local inner = var.inner - if not internal and inner and prefix and prefix ~= "" then - -- no prefix with e.g. components - inner = prefix .. ":" .. inner - end - return link(nil,nil,inner,var.r,actions) + return pdflink(inner,internal,var.r) end runners["inner with arguments"] = function(var,actions) @@ -391,12 +722,15 @@ end runners["outer"] = function(var,actions) local file, url = references.checkedfileorurl(var.outer,var.outer) - return link(url,file,var.arguments,nil,actions) + if file then + return pdffilelink(file,var.arguments,nil,actions) + elseif url then + return pdfurllink(url,var.arguments,nil,actions) + end end runners["outer with inner"] = function(var,actions) - local file = references.checkedfile(var.outer) -- was var.f but fails ... 
why - return link(nil,file,var.inner,var.r,actions) + return pdffilelink(references.checkedfile(var.outer),var.inner,var.r,actions) end runners["special outer with operation"] = function(var,actions) @@ -443,12 +777,9 @@ function specials.internal(var,actions) -- better resolve in strc-ref if not v then -- error report_reference("no internal reference %a",i) - elseif getinnermethod() == "names" then - -- named - return link(nil,nil,"aut:"..i,v.references.realpage,actions) else - -- page - return link(nil,nil,nil,v.references.realpage,actions) + flaginternals[i] = true + return pdflink(nil,i,v.references.realpage) end end @@ -461,8 +792,7 @@ local pages = references.pages function specials.page(var,actions) local file = var.f if file then - file = references.checkedfile(file) - return link(nil,file,nil,var.operation,actions) + return pdffilelink(references.checkedfile(file),nil,var.operation,actions) else local p = var.r if not p then -- todo: call special from reference code @@ -472,29 +802,24 @@ function specials.page(var,actions) else p = references.realpageofpage(tonumber(p)) end - -- if p then - -- var.r = p - -- end end - return link(nil,nil,nil,p or var.operation,actions) + return pdflink(nil,nil,p or var.operation) end end function specials.realpage(var,actions) local file = var.f if file then - file = references.checkedfile(file) - return link(nil,file,nil,var.operation,actions) + return pdffilelink(references.checkedfile(file),nil,var.operation,actions) else - return link(nil,nil,nil,var.operation,actions) + return pdflink(nil,nil,var.operation) end end function specials.userpage(var,actions) local file = var.f if file then - file = references.checkedfile(file) - return link(nil,file,nil,var.operation,actions) + return pdffilelink(references.checkedfile(file),nil,var.operation,actions) else local p = var.r if not p then -- todo: call special from reference code @@ -506,7 +831,7 @@ function specials.userpage(var,actions) -- var.r = p -- end end - return 
link(nil,nil,nil,p or var.operation,actions) + return pdflink(nil,nil,p or var.operation) end end @@ -514,7 +839,7 @@ function specials.deltapage(var,actions) local p = tonumber(var.operation) if p then p = references.checkedrealpage(p + texgetcount("realpageno")) - return link(nil,nil,nil,p,actions) + return pdflink(nil,nil,p) end end @@ -554,27 +879,29 @@ function specials.order(var,actions) -- references.specials ! end function specials.url(var,actions) - local url = references.checkedurl(var.operation) - return link(url,nil,var.arguments,nil,actions) + return pdfurllink(references.checkedurl(var.operation),var.arguments,nil,actions) end function specials.file(var,actions) - local file = references.checkedfile(var.operation) - return link(nil,file,var.arguments,nil,actions) + return pdffilelink(references.checkedfile(var.operation),var.arguments,nil,actions) end function specials.fileorurl(var,actions) local file, url = references.checkedfileorurl(var.operation,var.operation) - return link(url,file,var.arguments,nil,actions) + if file then + return pdffilelink(file,var.arguments,nil,actions) + elseif url then + return pdfurllink(url,var.arguments,nil,actions) + end end function specials.program(var,content) local program = references.checkedprogram(var.operation) - return lpdf.launch(program,var.arguments) + return pdflaunch(program,var.arguments) end function specials.javascript(var) - return lpdf.javascript(var.operation,var.arguments) + return pdfjavascript(var.operation,var.arguments) end specials.JS = specials.javascript @@ -698,11 +1025,6 @@ function specials.action(var) end end ---~ entry.A = pdfdictionary { ---~ S = pdf_goto, ---~ D = .... 
---~ } - local function build(levels,start,parent,method) local startlevel = levels[start][1] local i, n = start, 0 @@ -727,12 +1049,9 @@ local function build(levels,start,parent,method) Title = pdfunicode(title), Parent = parent, Prev = prev and pdfreference(prev), + A = somedestination(reference.internal,reference.internal,reference.realpage), } - if method == "internal" then - entry.Dest = "aut:" .. reference.internal - else -- if method == "page" then - entry.Dest = pagedestination(reference.realpage) - end + -- entry.Dest = somedestination(reference.internal,reference.internal,reference.realpage) if not first then first, last = child, child end prev = child last = prev @@ -771,10 +1090,10 @@ function codeinjections.addbookmarks(levels,method) Count = m, } pdfflushobject(parent,dict) - lpdf.addtocatalog("Outlines",lpdf.reference(parent)) + pdfaddtocatalog("Outlines",lpdf.reference(parent)) end end -- this could also be hooked into the frontend finalizer -lpdf.registerdocumentfinalizer(function() bookmarks.place() end,1,"bookmarks") +lpdf.registerdocumentfinalizer(function() bookmarks.place() end,1,"bookmarks") -- hm, why indirect call diff --git a/tex/context/base/lpdf-col.lua b/tex/context/base/lpdf-col.lua index b358d0820..9e483f9b5 100644 --- a/tex/context/base/lpdf-col.lua +++ b/tex/context/base/lpdf-col.lua @@ -14,42 +14,49 @@ local formatters = string.formatters local backends, lpdf, nodes = backends, lpdf, nodes -local allocate = utilities.storage.allocate -local formatters = string.formatters - -local nodeinjections = backends.pdf.nodeinjections -local codeinjections = backends.pdf.codeinjections -local registrations = backends.pdf.registrations - -local nodepool = nodes.pool -local register = nodepool.register -local pdfliteral = nodepool.pdfliteral - -local pdfconstant = lpdf.constant -local pdfstring = lpdf.string -local pdfdictionary = lpdf.dictionary -local pdfarray = lpdf.array -local pdfreference = lpdf.reference -local pdfverbose = lpdf.verbose 
-local pdfflushobject = lpdf.flushobject -local pdfflushstreamobject = lpdf.flushstreamobject - -local colors = attributes.colors -local transparencies = attributes.transparencies -local registertransparancy = transparencies.register -local registercolor = colors.register -local colorsvalue = colors.value -local transparenciesvalue = transparencies.value -local forcedmodel = colors.forcedmodel - -local c_transparency = pdfconstant("Transparency") - -local f_gray = formatters["%.3f g %.3f G"] -local f_rgb = formatters["%.3f %.3f %.3f rg %.3f %.3f %.3f RG"] -local f_cmyk = formatters["%.3f %.3f %.3f %.3f k %.3f %.3f %.3f %.3f K"] +local allocate = utilities.storage.allocate +local formatters = string.formatters + +local nodeinjections = backends.pdf.nodeinjections +local codeinjections = backends.pdf.codeinjections +local registrations = backends.pdf.registrations + +local nodepool = nodes.pool +local register = nodepool.register +local pdfliteral = nodepool.pdfliteral + +local pdfconstant = lpdf.constant +local pdfstring = lpdf.string +local pdfdictionary = lpdf.dictionary +local pdfarray = lpdf.array +local pdfreference = lpdf.reference +local pdfverbose = lpdf.verbose +local pdfflushobject = lpdf.flushobject +local pdfdelayedobject = lpdf.delayedobject +local pdfflushstreamobject = lpdf.flushstreamobject + +local pdfshareobjectreference = lpdf.shareobjectreference + +local addtopageattributes = lpdf.addtopageattributes +local adddocumentcolorspace = lpdf.adddocumentcolorspace +local adddocumentextgstate = lpdf.adddocumentextgstate + +local colors = attributes.colors +local transparencies = attributes.transparencies +local registertransparancy = transparencies.register +local registercolor = colors.register +local colorsvalue = colors.value +local transparenciesvalue = transparencies.value +local forcedmodel = colors.forcedmodel + +local c_transparency = pdfconstant("Transparency") + +local f_gray = formatters["%.3F g %.3F G"] +local f_rgb = formatters["%.3F %.3F 
%.3F rg %.3F %.3F %.3F RG"] +local f_cmyk = formatters["%.3F %.3F %.3F %.3F k %.3F %.3F %.3F %.3F K"] local f_spot = formatters["/%s cs /%s CS %s SCN %s scn"] local f_tr = formatters["Tr%s"] -local f_cm = formatters["q %f %f %f %f %f %f cm"] +local f_cm = formatters["q %F %F %F %F %F %F cm"] local f_effect = formatters["%s Tc %s w %s Tr"] local f_tr_gs = formatters["/Tr%s gs"] local f_num_1 = tostring @@ -76,11 +83,13 @@ lpdf.transparencygroups = transparencygroups table.setmetatableindex(transparencygroups, function(transparencygroups,colormodel) local cs = colorspaceconstants[colormodel] if cs then - local g = pdfreference(pdfflushobject(pdfdictionary { + local d = pdfdictionary { S = c_transparency, CS = cs, I = true, - })) + } + -- local g = pdfreference(pdfflushobject(tostring(d))) + local g = pdfreference(pdfdelayedobject(tostring(d))) transparencygroups[colormodel] = g return g else @@ -95,7 +104,7 @@ local function addpagegroup() if currentgroupcolormodel then local g = transparencygroups[currentgroupcolormodel] if g then - lpdf.addtopageattributes("Group",g) + addtopageattributes("Group",g) end end end @@ -224,7 +233,7 @@ local function registersomespotcolor(name,noffractions,names,p,colorspace,range, local mr = pdfreference(m) spotcolorhash[name] = m documentcolorspaces[name] = mr - lpdf.adddocumentcolorspace(name,mr) + adddocumentcolorspace(name,mr) else local cnames = pdfarray() local domain = pdfarray() @@ -280,13 +289,13 @@ local function registersomespotcolor(name,noffractions,names,p,colorspace,range, cnames, colorspace, pdfreference(calculation), - lpdf.shareobjectreference(tostring(channels)), -- optional but needed for shades + pdfshareobjectreference(tostring(channels)), -- optional but needed for shades } local m = pdfflushobject(array) local mr = pdfreference(m) spotcolorhash[name] = m documentcolorspaces[name] = mr - lpdf.adddocumentcolorspace(name,mr) + adddocumentcolorspace(name,mr) end end @@ -336,7 +345,7 @@ local function 
registersomeindexcolor(name,noffractions,names,p,colorspace,range end vector = pdfverbose { "<", concat(vector, " "), ">" } local n = pdfflushobject(pdfarray{ pdf_indexed, a, 255, vector }) - lpdf.adddocumentcolorspace(format("%s_indexed",name),pdfreference(n)) + adddocumentcolorspace(format("%s_indexed",name),pdfreference(n)) return n end @@ -455,7 +464,7 @@ function registrations.transparency(n,a,t) local mr = pdfreference(m) transparencyhash[0] = m documenttransparencies[0] = mr - lpdf.adddocumentextgstate("Tr0",mr) + adddocumentextgstate("Tr0",mr) done = true end if n > 0 and not transparencyhash[n] then @@ -470,7 +479,7 @@ function registrations.transparency(n,a,t) local mr = pdfreference(m) transparencyhash[n] = m documenttransparencies[n] = mr - lpdf.adddocumentextgstate(f_tr(n),mr) + adddocumentextgstate(f_tr(n),mr) end end @@ -689,7 +698,7 @@ end -- this will move to lpdf-spe.lua -local f_slant = formatters["pdf: q 1 0 %f 1 0 0 cm"] +local f_slant = formatters["pdf: q 1 0 %F 1 0 0 cm"] backends.pdf.tables.vfspecials = allocate { -- todo: distinguish between glyph and rule color diff --git a/tex/context/base/lpdf-fld.lua b/tex/context/base/lpdf-fld.lua index a9b9fd72d..414562ad5 100644 --- a/tex/context/base/lpdf-fld.lua +++ b/tex/context/base/lpdf-fld.lua @@ -55,7 +55,8 @@ if not modules then modules = { } end modules ['lpdf-fld'] = { -- for printing especially when highlighting (those colorfull foregrounds) is -- on. 
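In the lpdf-fld.lua hunk that follows, the /Ff and /F field flag constants are rewritten from decimal literals to powers of two, which makes the PDF bit positions explicit: bit n corresponds to the value 2^(n-1). A standalone sanity check of that encoding (the `combine` helper is hypothetical, not part of the patch):

```lua
-- A few of the /Ff widget flags as encoded in the patch: PDF bit position n
-- maps to 2^(n-1), so the old decimal literals and the powers of two agree.
local flag = {
    ReadOnly   = 2^0,  -- bit 1
    Required   = 2^1,  -- bit 2
    MultiLine  = 2^12, -- bit 13
    Radio      = 2^15, -- bit 16
    PushButton = 2^16, -- bit 17
}

-- combine named flags by addition (valid because each name contributes a
-- distinct bit and is used at most once)
local function combine(...)
    local n = 0
    for i = 1, select("#", ...) do
        n = n + flag[select(i, ...)]
    end
    return n
end
```

Addition is safe here only because every flag occupies its own bit and appears at most once; with possibly repeated flags a bitwise or would be needed instead.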
-local gmatch, lower, format = string.gmatch, string.lower, string.format +local tostring, next = tostring, next +local gmatch, lower, format, formatters = string.gmatch, string.lower, string.format, string.formatters local lpegmatch = lpeg.match local utfchar = utf.char local bpfactor, todimen = number.dimenfactors.bp, string.todimen @@ -92,14 +93,13 @@ local pdfflushobject = lpdf.flushobject local pdfshareobjectreference = lpdf.shareobjectreference local pdfshareobject = lpdf.shareobject local pdfreserveobject = lpdf.reserveobject -local pdfreserveannotation = lpdf.reserveannotation local pdfaction = lpdf.action -local hpack_node = node.hpack - -local nodepool = nodes.pool +local pdfcolor = lpdf.color +local pdfcolorvalues = lpdf.colorvalues +local pdflayerreference = lpdf.layerreference -local pdfannotation_node = nodepool.pdfannotation +local hpack_node = node.hpack local submitoutputformat = 0 -- 0=unknown 1=HTML 2=FDF 3=XML => not yet used, needs to be checked @@ -125,39 +125,39 @@ function codeinjections.setformsmethod(name) end local flag = { -- /Ff - ReadOnly = 1, -- 1 - Required = 2, -- 2 - NoExport = 4, -- 3 - MultiLine = 4096, -- 13 - Password = 8192, -- 14 - NoToggleToOff = 16384, -- 15 - Radio = 32768, -- 16 - PushButton = 65536, -- 17 - PopUp = 131072, -- 18 - Edit = 262144, -- 19 - Sort = 524288, -- 20 - FileSelect = 1048576, -- 21 - DoNotSpellCheck = 4194304, -- 23 - DoNotScroll = 8388608, -- 24 - Comb = 16777216, -- 25 - RichText = 33554432, -- 26 - RadiosInUnison = 33554432, -- 26 - CommitOnSelChange = 67108864, -- 27 + ReadOnly = 2^ 0, -- 1 + Required = 2^ 1, -- 2 + NoExport = 2^ 2, -- 3 + MultiLine = 2^12, -- 13 + Password = 2^13, -- 14 + NoToggleToOff = 2^14, -- 15 + Radio = 2^15, -- 16 + PushButton = 2^16, -- 17 + PopUp = 2^17, -- 18 + Edit = 2^18, -- 19 + Sort = 2^19, -- 20 + FileSelect = 2^20, -- 21 + DoNotSpellCheck = 2^22, -- 23 + DoNotScroll = 2^23, -- 24 + Comb = 2^24, -- 25 + RichText = 2^25, -- 26 + RadiosInUnison = 2^25, -- 26 + 
CommitOnSelChange = 2^26, -- 27 } local plus = { -- /F - Invisible = 1, -- 1 - Hidden = 2, -- 2 - Printable = 4, -- 3 - Print = 4, -- 3 - NoZoom = 8, -- 4 - NoRotate = 16, -- 5 - NoView = 32, -- 6 - ReadOnly = 64, -- 7 - Locked = 128, -- 8 - ToggleNoView = 256, -- 9 - LockedContents = 512, -- 10, - AutoView = 256, -- 288 (6+9) + Invisible = 2^0, -- 1 + Hidden = 2^1, -- 2 + Printable = 2^2, -- 3 + Print = 2^2, -- 3 + NoZoom = 2^3, -- 4 + NoRotate = 2^4, -- 5 + NoView = 2^5, -- 6 + ReadOnly = 2^6, -- 7 + Locked = 2^7, -- 8 + ToggleNoView = 2^8, -- 9 + LockedContents = 2^9, -- 10, + AutoView = 2^8, -- 6 + 9 ? } -- todo: check what is interfaced @@ -198,33 +198,82 @@ local function fieldplus(specification) -- /F return n end -local function checked(what) - local set, bug = references.identify("",what) - if not bug and #set > 0 then - local r, n = pdfaction(set) - return pdfshareobjectreference(r) - end -end +-- keep: +-- +-- local function checked(what) +-- local set, bug = references.identify("",what) +-- if not bug and #set > 0 then +-- local r, n = pdfaction(set) +-- return pdfshareobjectreference(r) +-- end +-- end +-- +-- local function fieldactions(specification) -- share actions +-- local d, a = { }, nil +-- a = specification.mousedown +-- or specification.clickin if a and a ~= "" then d.D = checked(a) end +-- a = specification.mouseup +-- or specification.clickout if a and a ~= "" then d.U = checked(a) end +-- a = specification.regionin if a and a ~= "" then d.E = checked(a) end -- Enter +-- a = specification.regionout if a and a ~= "" then d.X = checked(a) end -- eXit +-- a = specification.afterkey if a and a ~= "" then d.K = checked(a) end +-- a = specification.format if a and a ~= "" then d.F = checked(a) end +-- a = specification.validate if a and a ~= "" then d.V = checked(a) end +-- a = specification.calculate if a and a ~= "" then d.C = checked(a) end +-- a = specification.focusin if a and a ~= "" then d.Fo = checked(a) end +-- a = specification.focusout 
if a and a ~= "" then d.Bl = checked(a) end +-- a = specification.openpage if a and a ~= "" then d.PO = checked(a) end +-- a = specification.closepage if a and a ~= "" then d.PC = checked(a) end +-- -- a = specification.visiblepage if a and a ~= "" then d.PV = checked(a) end +-- -- a = specification.invisiblepage if a and a ~= "" then d.PI = checked(a) end +-- return next(d) and pdfdictionary(d) +-- end + +local mapping = { + mousedown = "D", clickin = "D", + mouseup = "U", clickout = "U", + regionin = "E", + regionout = "X", + afterkey = "K", + format = "F", + validate = "V", + calculate = "C", + focusin = "Fo", + focusout = "Bl", + openpage = "PO", + closepage = "PC", + -- visiblepage = "PV", + -- invisiblepage = "PI", +} local function fieldactions(specification) -- share actions - local d, a = { }, nil - a = specification.mousedown - or specification.clickin if a and a ~= "" then d.D = checked(a) end - a = specification.mouseup - or specification.clickout if a and a ~= "" then d.U = checked(a) end - a = specification.regionin if a and a ~= "" then d.E = checked(a) end -- Enter - a = specification.regionout if a and a ~= "" then d.X = checked(a) end -- eXit - a = specification.afterkey if a and a ~= "" then d.K = checked(a) end - a = specification.format if a and a ~= "" then d.F = checked(a) end - a = specification.validate if a and a ~= "" then d.V = checked(a) end - a = specification.calculate if a and a ~= "" then d.C = checked(a) end - a = specification.focusin if a and a ~= "" then d.Fo = checked(a) end - a = specification.focusout if a and a ~= "" then d.Bl = checked(a) end - a = specification.openpage if a and a ~= "" then d.PO = checked(a) end - a = specification.closepage if a and a ~= "" then d.PC = checked(a) end - -- a = specification.visiblepage if a and a ~= "" then d.PV = checked(a) end - -- a = specification.invisiblepage if a and a ~= "" then d.PI = checked(a) end - return next(d) and pdfdictionary(d) + local d = nil + for key, target in next, 
mapping do + local code = specification[key] + if code and code ~= "" then + -- local a = checked(code) + local set, bug = references.identify("",code) + if not bug and #set > 0 then + local a = pdfaction(set) -- r, n + if a then + local r = pdfshareobjectreference(a) + if d then + d[target] = r + else + d = pdfdictionary { [target] = r } + end + else + report_fields("invalid field action %a, case %s",code,2) + end + else + report_fields("invalid field action %a, case %s",code,1) + end + end + end + -- if d then + -- d = pdfshareobjectreference(d) -- not much overlap or maybe only some patterns + -- end + return d end -- fonts and color @@ -298,16 +347,16 @@ local function fieldsurrounding(specification) fontsize = todimen(fontsize) fontsize = fontsize and (bpfactor * fontsize) or 12 fontraise = 0.1 * fontsize -- todo: figure out what the natural one is and compensate for strutdp - local fontcode = format("%0.4f Tf %0.4f Ts",fontsize,fontraise) + local fontcode = formatters["%0.4f Tf %0.4f Ts"](fontsize,fontraise) -- we could test for colorvalue being 1 (black) and omit it then - local colorcode = lpdf.color(3,colorvalue) -- we force an rgb color space + local colorcode = pdfcolor(3,colorvalue) -- we force an rgb color space if trace_fields then report_fields("using font, style %a, alternative %a, size %p, tag %a, code %a",fontstyle,fontalternative,fontsize,tag,fontcode) report_fields("using color, value %a, code %a",colorvalue,colorcode) end local stream = pdfstream { pdfconstant(tag), - format("%s %s",fontcode,colorcode) + formatters["%s %s"](fontcode,colorcode) } usedfonts[tag] = a -- the name -- move up with "x.y Ts" @@ -570,17 +619,14 @@ local function todingbat(n) end end --- local zero_bc = pdfarray { 0, 0, 0 } --- local zero_bg = pdfarray { 1, 1, 1 } - local function fieldrendering(specification) local bvalue = tonumber(specification.backgroundcolorvalue) local fvalue = tonumber(specification.framecolorvalue) local svalue = specification.fontsymbol if 
bvalue or fvalue or (svalue and svalue ~= "") then return pdfdictionary { - BG = bvalue and pdfarray { lpdf.colorvalues(3,bvalue) } or nil, -- or zero_bg, - BC = fvalue and pdfarray { lpdf.colorvalues(3,fvalue) } or nil, -- or zero_bc, + BG = bvalue and pdfarray { pdfcolorvalues(3,bvalue) } or nil, -- or zero_bg, + BC = fvalue and pdfarray { pdfcolorvalues(3,fvalue) } or nil, -- or zero_bc, CA = svalue and pdfstring (svalue) or nil, } end @@ -590,7 +636,7 @@ end local function fieldlayer(specification) -- we can move this in line local layer = specification.layer - return (layer and lpdf.layerreference(layer)) or nil + return (layer and pdflayerreference(layer)) or nil end -- defining @@ -611,7 +657,7 @@ local xfdftemplate = [[ function codeinjections.exportformdata(name) local result = { } for k, v in table.sortedhash(fields) do - result[#result+1] = format(" <field name='%s'><value>%s</value></field>",v.name or k,v.default or "") + result[#result+1] = formatters[" <field name='%s'><value>%s</value></field>"](v.name or k,v.default or "") end local base = file.basename(tex.jobname) local xfdf = format(xfdftemplate,base,table.concat(result,"\n")) @@ -912,7 +958,7 @@ local function save_parent(field,specification,d,hasopt) end local function save_kid(field,specification,d,optname) - local kn = pdfreserveannotation() + local kn = pdfreserveobject() field.kids[#field.kids+1] = pdfreference(kn) if optname then local opt = field.opt @@ -921,7 +967,7 @@ local function save_kid(field,specification,d,optname) end end local width, height, depth = specification.width or 0, specification.height or 0, specification.depth - local box = hpack_node(pdfannotation_node(width,height,depth,d(),kn)) + local box = hpack_node(nodeinjections.annotation(width,height,depth,d(),kn)) box.width, box.height, box.depth = width, height, depth -- redundant return box end @@ -969,6 +1015,8 @@ local function makelinechild(name,specification) if trace_fields then report_fields("using child text 
%a",name) end + -- we could save a little by not setting some key/value when it's the + -- same as parent but it would cost more memory to keep track of it local d = pdfdictionary { Subtype = pdf_widget, Parent = pdfreference(parent.pobj), diff --git a/tex/context/base/lpdf-fmt.lua b/tex/context/base/lpdf-fmt.lua index b444f03c3..568b801b4 100644 --- a/tex/context/base/lpdf-fmt.lua +++ b/tex/context/base/lpdf-fmt.lua @@ -349,7 +349,7 @@ local filenames = { } local function locatefile(filename) - local fullname = resolvers.findfile(filename,"icc") + local fullname = resolvers.findfile(filename,"icc",1,true) if not fullname or fullname == "" then fullname = resolvers.finders.byscheme("loc",filename) -- could be specific to the project end @@ -743,7 +743,7 @@ end function codeinjections.supportedformats() local t = { } for k, v in table.sortedhash(formats) do - if find(k,"pdf") then + if find(k,"pdf",1,true) then t[#t+1] = k end end diff --git a/tex/context/base/lpdf-grp.lua b/tex/context/base/lpdf-grp.lua index fed5e6a46..befe52c76 100644 --- a/tex/context/base/lpdf-grp.lua +++ b/tex/context/base/lpdf-grp.lua @@ -236,7 +236,7 @@ function img.package(image) -- see lpdf-u3d ** local height = boundingbox[4] local xform = img.scan { attr = resources(), - stream = format("%f 0 0 %f 0 0 cm /%s Do",width,height,imagetag), + stream = format("%F 0 0 %F 0 0 cm /%s Do",width,height,imagetag), bbox = { 0, 0, width/factor, height/factor }, } img.immediatewrite(xform) diff --git a/tex/context/base/lpdf-ini.lua b/tex/context/base/lpdf-ini.lua index 23fe6c177..a89b8b8c5 100644 --- a/tex/context/base/lpdf-ini.lua +++ b/tex/context/base/lpdf-ini.lua @@ -9,35 +9,178 @@ if not modules then modules = { } end modules ['lpdf-ini'] = { local setmetatable, getmetatable, type, next, tostring, tonumber, rawset = setmetatable, getmetatable, type, next, tostring, tonumber, rawset local char, byte, format, gsub, concat, match, sub, gmatch = string.char, string.byte, string.format, string.gsub, 
table.concat, string.match, string.sub, string.gmatch local utfchar, utfvalues = utf.char, utf.values -local sind, cosd, floor = math.sind, math.cosd, math.floor +local sind, cosd, floor, max, min = math.sind, math.cosd, math.floor, math.max, math.min local lpegmatch, P, C, R, S, Cc, Cs = lpeg.match, lpeg.P, lpeg.C, lpeg.R, lpeg.S, lpeg.Cc, lpeg.Cs local formatters = string.formatters -local pdfreserveobject = pdf.reserveobj -local pdfimmediateobject = pdf.immediateobj -local pdfdeferredobject = pdf.obj -local pdfreferenceobject = pdf.refobj +local report_objects = logs.reporter("backend","objects") +local report_finalizing = logs.reporter("backend","finalizing") +local report_blocked = logs.reporter("backend","blocked") + +-- gethpos : used +-- getpos : used +-- getvpos : used +-- +-- getmatrix : used +-- hasmatrix : used +-- +-- mapfile : used in font-ctx.lua +-- mapline : used in font-ctx.lua +-- +-- maxobjnum : not used +-- obj : used +-- immediateobj : used +-- objtype : not used +-- pageref : used +-- print : can be used +-- refobj : used +-- registerannot : not to be used +-- reserveobj : used + +-- pdf.catalog : used +-- pdf.info : used +-- pdf.trailer : used +-- pdf.names : not to be used + +-- pdf.setinfo : used +-- pdf.setcatalog : used +-- pdf.setnames : not to be used +-- pdf.settrailer : used + +-- pdf.getinfo : used +-- pdf.getcatalog : used +-- pdf.getnames : not to be used +-- pdf.gettrailer : used + +local pdf = pdf +local factor = number.dimenfactors.bp + +if pdf.setinfo then +-- table.setmetatablenewindex(pdf,function(t,k,v) +-- report_blocked("'pdf.%s' is not supported",k) +-- end) + -- the getters are harmless +end + +if not pdf.setinfo then + function pdf.setinfo (s) pdf.info = s end + function pdf.setcatalog(s) pdf.catalog = s end + function pdf.setnames (s) pdf.names = s end + function pdf.settrailer(s) pdf.trailer = s end +end + +if not pdf.getpos then + function pdf.getpos () return pdf.h, pdf.v end + function pdf.gethpos () return pdf.h 
end + function pdf.getvpos () return pdf.v end + function pdf.hasmatrix() return false end + function pdf.getmatrix() return 1, 0, 0, 1, 0, 0 end +end + +if not pdf.setpageresources then + function pdf.setpageresources (s) pdf.pageresources = s end + function pdf.setpageattributes (s) pdf.pageattributes = s end + function pdf.setpagesattributes(s) pdf.pagesattributes = s end +end + +local pdfsetinfo = pdf.setinfo +local pdfsetcatalog = pdf.setcatalog +local pdfsetnames = pdf.setnames +local pdfsettrailer = pdf.settrailer + +local pdfsetpageresources = pdf.setpageresources +local pdfsetpageattributes = pdf.setpageattributes +local pdfsetpagesattributes = pdf.setpagesattributes + +local pdfgetpos = pdf.getpos +local pdfgethpos = pdf.gethpos +local pdfgetvpos = pdf.getvpos +local pdfgetmatrix = pdf.getmatrix +local pdfhasmatrix = pdf.hasmatrix + +local pdfreserveobject = pdf.reserveobj +local pdfimmediateobject = pdf.immediateobj +local pdfdeferredobject = pdf.obj +local pdfreferenceobject = pdf.refobj + +function pdf.setinfo () report_blocked("'pdf.%s' is not supported","setinfo") end -- use lpdf.addtoinfo etc +function pdf.setcatalog () report_blocked("'pdf.%s' is not supported","setcatalog") end +function pdf.setnames () report_blocked("'pdf.%s' is not supported","setnames") end +function pdf.settrailer () report_blocked("'pdf.%s' is not supported","settrailer") end +function pdf.setpageresources () report_blocked("'pdf.%s' is not supported","setpageresources") end +function pdf.setpageattributes () report_blocked("'pdf.%s' is not supported","setpageattributes") end +function pdf.setpagesattributes() report_blocked("'pdf.%s' is not supported","setpagesattributes") end + +function pdf.registerannot() report_blocked("'pdf.%s' is not supported","registerannot") end local trace_finalizers = false trackers.register("backend.finalizers", function(v) trace_finalizers = v end) local trace_resources = false trackers.register("backend.resources", function(v) trace_resources 
= v end) local trace_objects = false trackers.register("backend.objects", function(v) trace_objects = v end) local trace_detail = false trackers.register("backend.detail", function(v) trace_detail = v end) -local report_objects = logs.reporter("backend","objects") -local report_finalizing = logs.reporter("backend","finalizing") - -local backends = backends - -backends.pdf = backends.pdf or { +local backends = backends +local pdfbackend = { comment = "backend for directly generating pdf output", nodeinjections = { }, codeinjections = { }, registrations = { }, tables = { }, } +backends.pdf = pdfbackend +lpdf = lpdf or { } +local lpdf = lpdf + +local codeinjections = pdfbackend.codeinjections +local nodeinjections = pdfbackend.nodeinjections + +codeinjections.getpos = pdfgetpos lpdf.getpos = pdfgetpos +codeinjections.gethpos = pdfgethpos lpdf.gethpos = pdfgethpos +codeinjections.getvpos = pdfgetvpos lpdf.getvpos = pdfgetvpos +codeinjections.hasmatrix = pdfhasmatrix lpdf.hasmatrix = pdfhasmatrix +codeinjections.getmatrix = pdfgetmatrix lpdf.getmatrix = pdfgetmatrix + +function lpdf.transform(llx,lly,urx,ury) + if pdfhasmatrix() then + local sx, rx, ry, sy = pdfgetmatrix() + local w, h = urx - llx, ury - lly + return llx, lly, llx + sy*w - ry*h, lly + sx*h - rx*w + else + return llx, lly, urx, ury + end +end -lpdf = lpdf or { } -local lpdf = lpdf +-- function lpdf.rectangle(width,height,depth) +-- local h, v = pdfgetpos() +-- local llx, lly, urx, ury +-- if pdfhasmatrix() then +-- local sx, rx, ry, sy = pdfgetmatrix() +-- llx = 0 +-- lly = -depth +-- -- llx = ry * depth +-- -- lly = -sx * depth +-- urx = sy * width - ry * height +-- ury = sx * height - rx * width +-- else +-- llx = 0 +-- lly = -depth +-- urx = width +-- ury = height +-- return (h+llx)*factor, (v+lly)*factor, (h+urx)*factor, (v+ury)*factor +-- end +-- end + +function lpdf.rectangle(width,height,depth) + local h, v = pdfgetpos() + if pdfhasmatrix() then + local sx, rx, ry, sy = pdfgetmatrix() + -- return 
(h+ry*depth)*factor, (v-sx*depth)*factor, (h+sy*width-ry*height)*factor, (v+sx*height-rx*width)*factor + return h *factor, (v- depth)*factor, (h+sy*width-ry*height)*factor, (v+sx*height-rx*width)*factor + else + return h *factor, (v- depth)*factor, (h+ width )*factor, (v+ height )*factor + end +end + +-- local function tosixteen(str) -- an lpeg might be faster (no table) if not str or str == "" then @@ -91,13 +234,13 @@ end lpdf.toeight = toeight ---~ local escaped = lpeg.Cs((lpeg.S("\0\t\n\r\f ()[]{}/%")/function(s) return format("#%02X",byte(s)) end + lpeg.P(1))^0) - ---~ local function cleaned(str) ---~ return (str and str ~= "" and lpegmatch(escaped,str)) or "" ---~ end - ---~ lpdf.cleaned = cleaned -- not public yet +-- local escaped = lpeg.Cs((lpeg.S("\0\t\n\r\f ()[]{}/%")/function(s) return format("#%02X",byte(s)) end + lpeg.P(1))^0) +-- +-- local function cleaned(str) +-- return (str and str ~= "" and lpegmatch(escaped,str)) or "" +-- end +-- +-- lpdf.cleaned = cleaned -- not public yet local function merge_t(a,b) local t = { } @@ -112,16 +255,16 @@ local f_dictionary = formatters["<< % t >>"] local f_key_array = formatters["/%s [ % t ]"] local f_array = formatters["[ % t ]"] +-- local f_key_value = formatters["/%s %s"] +-- local f_key_dictionary = formatters["/%s <<% t>>"] +-- local f_dictionary = formatters["<<% t>>"] +-- local f_key_array = formatters["/%s [% t]"] +-- local f_array = formatters["[% t]"] + local tostring_a, tostring_d tostring_d = function(t,contentonly,key) - if not next(t) then - if contentonly then - return "" - else - return "<< >>" - end - else + if next(t) then local r, rn = { }, 0 for k, v in next, t do rn = rn + 1 @@ -150,18 +293,16 @@ tostring_d = function(t,contentonly,key) else return f_dictionary(r) end + elseif contentonly then + return "" + else + return "<< >>" end end tostring_a = function(t,contentonly,key) local tn = #t - if tn == 0 then - if contentonly then - return "" - else - return "[ ]" - end - else + if tn ~= 0 
then local r = { } for k=1,tn do local v = t[k] @@ -191,10 +332,14 @@ tostring_a = function(t,contentonly,key) else return f_array(r) end + elseif contentonly then + return "" + else + return "[ ]" end end -local tostring_x = function(t) return concat(t, " ") end +local tostring_x = function(t) return concat(t," ") end local tostring_s = function(t) return toeight(t[1]) end local tostring_u = function(t) return tosixteen(t[1]) end local tostring_n = function(t) return tostring(t[1]) end -- tostring not needed @@ -207,7 +352,7 @@ local tostring_r = function(t) local n = t[1] return n and n > 0 and (n .. " 0 R local tostring_v = function(t) local s = t[1] if type(s) == "table" then - return concat(s,"") + return concat(s) else return s end @@ -325,12 +470,27 @@ local function pdfboolean(b,default) end end -local function pdfreference(r) - return setmetatable({ r or 0 },mt_r) +local r_zero = setmetatable({ 0 },mt_r) + +local function pdfreference(r) -- maybe make a weak table + if r and r ~= 0 then + return setmetatable({ r },mt_r) + else + return r_zero + end end +local v_zero = setmetatable({ 0 },mt_v) +local v_empty = setmetatable({ "" },mt_v) + local function pdfverbose(t) -- maybe check for type - return setmetatable({ t or "" },mt_v) + if t == 0 then + return v_zero + elseif t == "" then + return v_empty + else + return setmetatable({ t },mt_v) + end end lpdf.stream = pdfstream -- THIS WILL PROBABLY CHANGE @@ -345,37 +505,19 @@ lpdf.boolean = pdfboolean lpdf.reference = pdfreference lpdf.verbose = pdfverbose --- n = pdf.obj(n, str) --- n = pdf.obj(n, "file", filename) --- n = pdf.obj(n, "stream", streamtext, attrtext) --- n = pdf.obj(n, "streamfile", filename, attrtext) - --- we only use immediate objects - --- todo: tracing - local names, cache = { }, { } function lpdf.reserveobject(name) - if name == "annot" then - -- catch misuse - return pdfreserveobject("annot") - else - local r = pdfreserveobject() - if name then - names[name] = r - if trace_objects then - 
report_objects("reserving number %a under name %a",r,name) - end - elseif trace_objects then - report_objects("reserving number %a",r) + local r = pdfreserveobject() -- we don't support "annot" + if name then + names[name] = r + if trace_objects then + report_objects("reserving number %a under name %a",r,name) end - return r + elseif trace_objects then + report_objects("reserving number %a",r) end -end - -function lpdf.reserveannotation() - return pdfreserveobject("annot") + return r end -- lpdf.immediateobject = pdfimmediateobject @@ -383,11 +525,29 @@ end -- lpdf.object = pdfdeferredobject -- lpdf.referenceobject = pdfreferenceobject -lpdf.pagereference = pdf.pageref or tex.pdfpageref -lpdf.registerannotation = pdf.registerannot +local pagereference = pdf.pageref or tex.pdfpageref +local nofpages = 0 -function lpdf.delayedobject(data) -- we will get rid of this one - local n = pdfdeferredobject(data) +function lpdf.pagereference(n) + if nofpages == 0 then + nofpages = structures.pages.nofpages + if nofpages == 0 then + nofpages = 1 + end + end + if n > nofpages then + return pagereference(nofpages) -- or 1, could be configureable + else + return pagereference(n) + end +end + +function lpdf.delayedobject(data,n) + if n then + pdfdeferredobject(n,data) + else + n = pdfdeferredobject(data) + end pdfreferenceobject(n) return n end @@ -484,60 +644,10 @@ function lpdf.shareobjectreference(content) end end ---~ local d = lpdf.dictionary() ---~ local e = lpdf.dictionary { ["e"] = "abc", x = lpdf.dictionary { ["f"] = "ABC" } } ---~ local f = lpdf.dictionary { ["f"] = "ABC" } ---~ local a = lpdf.array { lpdf.array { lpdf.string("xxx") } } - ---~ print(a) ---~ os.exit() - ---~ d["test"] = lpdf.string ("test") ---~ d["more"] = "more" ---~ d["bool"] = true ---~ d["numb"] = 1234 ---~ d["oeps"] = lpdf.dictionary { ["hans"] = "ton" } ---~ d["whow"] = lpdf.array { lpdf.string("ton") } - ---~ a[#a+1] = lpdf.string("xxx") ---~ a[#a+1] = lpdf.string("yyy") - ---~ d.what = a - ---~ 
print(e) - ---~ local d = lpdf.dictionary() ---~ d["abcd"] = { 1, 2, 3, "test" } ---~ print(d) ---~ print(d()) - ---~ local d = lpdf.array() ---~ d[#d+1] = 1 ---~ d[#d+1] = 2 ---~ d[#d+1] = 3 ---~ d[#d+1] = "test" ---~ print(d) - ---~ local d = lpdf.array() ---~ d[#d+1] = { 1, 2, 3, "test" } ---~ print(d) - ---~ local d = lpdf.array() ---~ d[#d+1] = { a=1, b=2, c=3, d="test" } ---~ print(d) - ---~ local s = lpdf.constant("xx") ---~ print(s) -- fails somehow ---~ print(s()) -- fails somehow - ---~ local s = lpdf.boolean(false) ---~ s.value = true ---~ print(s) ---~ print(s()) - -- three priority levels, default=2 -local pagefinalizers, documentfinalizers = { { }, { }, { } }, { { }, { }, { } } +local pagefinalizers = { { }, { }, { } } +local documentfinalizers = { { }, { }, { } } local pageresources, pageattributes, pagesattributes @@ -550,9 +660,9 @@ end resetpageproperties() local function setpageproperties() - pdf.pageresources = pageresources () - pdf.pageattributes = pageattributes () - pdf.pagesattributes = pagesattributes() + pdfsetpageresources (pageresources ()) + pdfsetpageattributes (pageattributes ()) + pdfsetpagesattributes(pagesattributes()) end local function addtopageresources (k,v) pageresources [k] = v end @@ -606,8 +716,8 @@ end lpdf.registerpagefinalizer = registerpagefinalizer lpdf.registerdocumentfinalizer = registerdocumentfinalizer -function lpdf.finalizepage() - if not environment.initex then +function lpdf.finalizepage(shipout) + if shipout and not environment.initex then -- resetpageproperties() -- maybe better before run(pagefinalizers,"page") setpageproperties() @@ -625,9 +735,27 @@ function lpdf.finalizedocument() end end -backends.pdf.codeinjections.finalizepage = lpdf.finalizepage -- will go when we have hook +-- codeinjections.finalizepage = lpdf.finalizepage -- no longer triggered at the tex end + +if not callbacks.register("finish_pdfpage", lpdf.finalizepage) then + + local find_tail = nodes.tail + local latelua_node = 
nodes.pool.latelua + + function nodeinjections.finalizepage(head) + local t = find_tail(head.list) + if t then + local n = latelua_node("lpdf.finalizepage(true)") -- last in the shipout + t.next = n + n.prev = t + end + return head, true + end + + nodes.tasks.appendaction("shipouts","normalizers","backends.pdf.nodeinjections.finalizepage") + +end ---~ callbacks.register("finish_pdfpage", lpdf.finalizepage) callbacks.register("finish_pdffile", lpdf.finalizedocument) -- some minimal tracing, handy for checking the order @@ -647,15 +775,34 @@ lpdf.protectresources = true local catalog = pdfdictionary { Type = pdfconstant("Catalog") } -- nicer, but when we assign we nil the Type local info = pdfdictionary { Type = pdfconstant("Info") } -- nicer, but when we assign we nil the Type -local names = pdfdictionary { Type = pdfconstant("Names") } -- nicer, but when we assign we nil the Type +----- names = pdfdictionary { Type = pdfconstant("Names") } -- nicer, but when we assign we nil the Type -local function flushcatalog() if not environment.initex then trace_flush("catalog") catalog.Type = nil pdf.catalog = catalog() end end -local function flushinfo () if not environment.initex then trace_flush("info") info .Type = nil pdf.info = info () end end -local function flushnames () if not environment.initex then trace_flush("names") names .Type = nil pdf.names = names () end end +local function flushcatalog() if not environment.initex then trace_flush("catalog") catalog.Type = nil pdfsetcatalog(catalog()) end end +local function flushinfo () if not environment.initex then trace_flush("info") info .Type = nil pdfsetinfo (info ()) end end +-------------- flushnames () if not environment.initex then trace_flush("names") names .Type = nil pdfsetnames (names ()) end end function lpdf.addtocatalog(k,v) if not (lpdf.protectresources and catalog[k]) then trace_set("catalog",k) catalog[k] = v end end function lpdf.addtoinfo (k,v) if not (lpdf.protectresources and info [k]) then 
trace_set("info", k) info [k] = v end end -function lpdf.addtonames (k,v) if not (lpdf.protectresources and names [k]) then trace_set("names", k) names [k] = v end end +-------- lpdf.addtonames (k,v) if not (lpdf.protectresources and names [k]) then trace_set("names", k) names [k] = v end end + +local names = pdfdictionary { + -- Type = pdfconstant("Names") +} + +local function flushnames() + if next(names) and not environment.initex then + names.Type = pdfconstant("Names") + trace_flush("names") + lpdf.addtocatalog("Names",pdfreference(pdfimmediateobject(tostring(names)))) + end +end + +function lpdf.addtonames(k,v) + if not (lpdf.protectresources and names [k]) then + trace_set("names", k) + names [k] = v + end +end local dummy = pdfreserveobject() -- else bug in hvmd due so some internal luatex conflict @@ -705,9 +852,9 @@ registerdocumentfinalizer(flushcolorspaces,3,"color spaces") registerdocumentfinalizer(flushpatterns,3,"patterns") registerdocumentfinalizer(flushshades,3,"shades") +registerdocumentfinalizer(flushnames,3,"names") -- before catalog registerdocumentfinalizer(flushcatalog,3,"catalog") registerdocumentfinalizer(flushinfo,3,"info") -registerdocumentfinalizer(flushnames,3,"names") -- before catalog registerpagefinalizer(checkextgstates,3,"extended graphic states") registerpagefinalizer(checkcolorspaces,3,"color spaces") @@ -718,7 +865,7 @@ registerpagefinalizer(checkshades,3,"shades") function lpdf.rotationcm(a) local s, c = sind(a), cosd(a) - return format("%0.6f %0.6f %0.6f %0.6f 0 0 cm",c,s,-s,c) + return format("%0.6F %0.6F %0.6F %0.6F 0 0 cm",c,s,-s,c) end -- ! 
-> universaltime @@ -795,29 +942,56 @@ end -- lpdf.addtoinfo("ConTeXt.Jobname", environment.jobname) -- lpdf.addtoinfo("ConTeXt.Url", "www.pragma-ade.com") -if not pdfreferenceobject then - - local delayed = { } - - local function flush() - local n = 0 - for k,v in next, delayed do - pdfimmediateobject(k,v) - n = n + 1 - end - if trace_objects then - report_objects("%s objects flushed",n) - end - delayed = { } - end - - lpdf.registerdocumentfinalizer(flush,3,"objects") -- so we need a final flush too - lpdf.registerpagefinalizer (flush,3,"objects") -- somehow this lags behind .. I need to look into that some day - - function lpdf.delayedobject(data) - local n = pdfreserveobject() - delayed[n] = data - return n - end +-- if not pdfreferenceobject then +-- +-- local delayed = { } +-- +-- local function flush() +-- local n = 0 +-- for k,v in next, delayed do +-- pdfimmediateobject(k,v) +-- n = n + 1 +-- end +-- if trace_objects then +-- report_objects("%s objects flushed",n) +-- end +-- delayed = { } +-- end +-- +-- lpdf.registerdocumentfinalizer(flush,3,"objects") -- so we need a final flush too +-- lpdf.registerpagefinalizer (flush,3,"objects") -- somehow this lags behind .. 
I need to look into that some day +-- +-- function lpdf.delayedobject(data) +-- local n = pdfreserveobject() +-- delayed[n] = data +-- return n +-- end +-- +-- end -end +-- setmetatable(pdf, { +-- __index = function(t,k) +-- if k == "info" then return pdf.getinfo() +-- elseif k == "catalog" then return pdf.getcatalog() +-- elseif k == "names" then return pdf.getnames() +-- elseif k == "trailer" then return pdf.gettrailer() +-- elseif k == "pageattribute" then return pdf.getpageattribute() +-- elseif k == "pageattributes" then return pdf.getpageattributes() +-- elseif k == "pageresources" then return pdf.getpageresources() +-- elseif +-- return nil +-- end +-- end, +-- __newindex = function(t,k,v) +-- if k == "info" then return pdf.setinfo(v) +-- elseif k == "catalog" then return pdf.setcatalog(v) +-- elseif k == "names" then return pdf.setnames(v) +-- elseif k == "trailer" then return pdf.settrailer(v) +-- elseif k == "pageattribute" then return pdf.setpageattribute(v) +-- elseif k == "pageattributes" then return pdf.setpageattributes(v) +-- elseif k == "pageresources" then return pdf.setpageresources(v) +-- else +-- rawset(t,k,v) +-- end +-- end, +-- }) diff --git a/tex/context/base/lpdf-mis.lua b/tex/context/base/lpdf-mis.lua index 43f6cb7e1..6efbd3882 100644 --- a/tex/context/base/lpdf-mis.lua +++ b/tex/context/base/lpdf-mis.lua @@ -16,7 +16,7 @@ if not modules then modules = { } end modules ['lpdf-mis'] = { -- course there are a couple of more changes. 
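The commented-out `setmetatable(pdf, ...)` block above redirects attribute-style reads and writes on the `pdf` table to the new getter/setter calls (`pdf.getinfo`, `pdf.setinfo`, and friends). As a language-neutral illustration of that redirection pattern only — the `get_info`/`set_info` names below are hypothetical stand-ins, not the LuaTeX API — a minimal Python analog:

```python
class PdfCompat:
    """Analog of the commented-out metatable shim above: attribute
    reads/writes are redirected to getter/setter calls when the
    backend provides them. Names are illustrative, not LuaTeX's."""

    def __init__(self, backend):
        # bypass our own __setattr__ while storing the backend
        object.__setattr__(self, "_backend", backend)

    def __getattr__(self, key):
        # only called when normal attribute lookup fails
        getter = getattr(self._backend, "get_" + key, None)
        if getter:
            return getter()
        raise AttributeError(key)

    def __setattr__(self, key, value):
        setter = getattr(self._backend, "set_" + key, None)
        if setter:
            setter(value)
        else:
            # fall back to a plain attribute, like rawset in the Lua
            object.__setattr__(self, key, value)
```

The Lua original keeps this block commented out; enabling such a shim trades a small per-access lookup cost for backward compatibility with code that still assigns `pdf.info` directly.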
local next, tostring = next, tostring -local format, gsub = string.format, string.gsub +local format, gsub, formatters = string.format, string.gsub, string.formatters local texset = tex.set local backends, lpdf, nodes = backends, lpdf, nodes @@ -41,6 +41,14 @@ local pdfverbose = lpdf.verbose local pdfstring = lpdf.string local pdfflushobject = lpdf.flushobject local pdfflushstreamobject = lpdf.flushstreamobject +local pdfaction = lpdf.action + +local formattedtimestamp = lpdf.pdftimestamp +local adddocumentextgstate = lpdf.adddocumentextgstate +local addtocatalog = lpdf.addtocatalog +local addtoinfo = lpdf.addtoinfo +local addtopageattributes = lpdf.addtopageattributes +local addtonames = lpdf.addtonames local variables = interfaces.variables local v_stop = variables.stop @@ -60,8 +68,8 @@ local function initializenegative() } local negative = pdfdictionary { Type = g, TR = pdfreference(pdfflushstreamobject("{ 1 exch sub }",d)) } local positive = pdfdictionary { Type = g, TR = pdfconstant("Identity") } - lpdf.adddocumentextgstate("GSnegative", pdfreference(pdfflushobject(negative))) - lpdf.adddocumentextgstate("GSpositive", pdfreference(pdfflushobject(positive))) + adddocumentextgstate("GSnegative", pdfreference(pdfflushobject(negative))) + adddocumentextgstate("GSpositive", pdfreference(pdfflushobject(positive))) initializenegative = nil end @@ -69,8 +77,8 @@ local function initializeoverprint() local g = pdfconstant("ExtGState") local knockout = pdfdictionary { Type = g, OP = false, OPM = 0 } local overprint = pdfdictionary { Type = g, OP = true, OPM = 1 } - lpdf.adddocumentextgstate("GSknockout", pdfreference(pdfflushobject(knockout))) - lpdf.adddocumentextgstate("GSoverprint", pdfreference(pdfflushobject(overprint))) + adddocumentextgstate("GSknockout", pdfreference(pdfflushobject(knockout))) + adddocumentextgstate("GSoverprint", pdfreference(pdfflushobject(overprint))) initializeoverprint = nil end @@ -92,8 +100,6 @@ function nodeinjections.negative() return 
copy_node(negative) end --- - -- function codeinjections.addtransparencygroup() -- -- png: /CS /DeviceRGB /I true -- local d = pdfdictionary { @@ -101,7 +107,7 @@ end -- I = true, -- K = true, -- } --- lpdf.registerpagefinalizer(function() lpdf.addtopageattributes("Group",d) end) -- hm +-- lpdf.registerpagefinalizer(function() addtopageattributes("Group",d) end) -- hm -- end -- actions (todo: store and update when changed) @@ -126,10 +132,10 @@ end local function flushdocumentactions() if opendocument then - lpdf.addtocatalog("OpenAction",lpdf.action(opendocument)) + addtocatalog("OpenAction",pdfaction(opendocument)) end if closedocument then - lpdf.addtocatalog("CloseAction",lpdf.action(closedocument)) + addtocatalog("CloseAction",pdfaction(closedocument)) end end @@ -137,12 +143,12 @@ local function flushpageactions() if openpage or closepage then local d = pdfdictionary() if openpage then - d.O = lpdf.action(openpage) + d.O = pdfaction(openpage) end if closepage then - d.C = lpdf.action(closepage) + d.C = pdfaction(closepage) end - lpdf.addtopageattributes("AA",d) + addtopageattributes("AA",d) end end @@ -169,37 +175,37 @@ local function setupidentity() if not title or title == "" then title = tex.jobname end - lpdf.addtoinfo("Title", pdfunicode(title), title) + addtoinfo("Title", pdfunicode(title), title) local subtitle = identity.subtitle or "" if subtitle ~= "" then - lpdf.addtoinfo("Subject", pdfunicode(subtitle), subtitle) + addtoinfo("Subject", pdfunicode(subtitle), subtitle) end local author = identity.author or "" if author ~= "" then - lpdf.addtoinfo("Author", pdfunicode(author), author) -- '/Author' in /Info, 'Creator' in XMP + addtoinfo("Author", pdfunicode(author), author) -- '/Author' in /Info, 'Creator' in XMP end local creator = identity.creator or "" if creator ~= "" then - lpdf.addtoinfo("Creator", pdfunicode(creator), creator) -- '/Creator' in /Info, 'CreatorTool' in XMP + addtoinfo("Creator", pdfunicode(creator), creator) -- '/Creator' in 
/Info, 'CreatorTool' in XMP end - lpdf.addtoinfo("CreationDate", pdfstring(lpdf.pdftimestamp(lpdf.timestamp()))) + local currenttimestamp = lpdf.timestamp() + addtoinfo("CreationDate", pdfstring(formattedtimestamp(currenttimestamp))) local date = identity.date or "" - local pdfdate = lpdf.pdftimestamp(date) + local pdfdate = formattedtimestamp(date) if pdfdate then - lpdf.addtoinfo("ModDate", pdfstring(pdfdate), date) + addtoinfo("ModDate", pdfstring(pdfdate), date) else -- users should enter the date in 2010-01-19T23:27:50+01:00 format -- and if not provided that way we use the creation time instead - date = lpdf.timestamp() - lpdf.addtoinfo("ModDate", pdfstring(lpdf.pdftimestamp(date)), date) + addtoinfo("ModDate", pdfstring(formattedtimestamp(currenttimestamp)), currenttimestamp) end local keywords = identity.keywords or "" if keywords ~= "" then keywords = gsub(keywords, "[%s,]+", " ") - lpdf.addtoinfo("Keywords",pdfunicode(keywords), keywords) + addtoinfo("Keywords",pdfunicode(keywords), keywords) end local id = lpdf.id() - lpdf.addtoinfo("ID", pdfstring(id), id) -- needed for pdf/x + addtoinfo("ID", pdfstring(id), id) -- needed for pdf/x done = true else -- no need for a message @@ -226,7 +232,7 @@ local function flushjavascripts() a[#a+1] = pdfstring(name) a[#a+1] = pdfreference(pdfflushobject(j)) end - lpdf.addtonames("JavaScript",pdfreference(pdfflushobject(pdfdictionary{ Names = a }))) + addtonames("JavaScript",pdfreference(pdfflushobject(pdfdictionary{ Names = a }))) end end @@ -285,16 +291,16 @@ local function documentspecification() layout = layout and pdfconstant(layout) fit = fit and pdfdictionary { FitWindow = true } if layout then - lpdf.addtocatalog("PageLayout",layout) + addtocatalog("PageLayout",layout) end if mode then - lpdf.addtocatalog("PageMode",mode) + addtocatalog("PageMode",mode) end if fit then - lpdf.addtocatalog("ViewerPreferences",fit) + addtocatalog("ViewerPreferences",fit) end - lpdf.addtoinfo ("Trapped", pdfconstant("False")) -- 
'/Trapped' in /Info, 'Trapped' in XMP - lpdf.addtocatalog("Version", pdfconstant(format("1.%s",tex.pdfminorversion))) + addtoinfo ("Trapped", pdfconstant("False")) -- '/Trapped' in /Info, 'Trapped' in XMP + addtocatalog("Version", pdfconstant(format("1.%s",tex.pdfminorversion))) end end @@ -303,7 +309,7 @@ end local factor = number.dimenfactors.bp local function boxvalue(n) -- we could share them - return pdfverbose(format("%0.4f",factor * n)) + return pdfverbose(formatters["%0.4F"](factor * n)) end local function pagespecification() @@ -314,10 +320,10 @@ local function pagespecification() boxvalue(width-leftoffset), boxvalue(pageheight-topoffset), } - lpdf.addtopageattributes("CropBox",box) -- mandate for rendering - lpdf.addtopageattributes("TrimBox",box) -- mandate for pdf/x - -- lpdf.addtopageattributes("BleedBox",box) - -- lpdf.addtopageattributes("ArtBox",box) + addtopageattributes("CropBox",box) -- mandate for rendering + addtopageattributes("TrimBox",box) -- mandate for pdf/x + -- addtopageattributes("BleedBox",box) + -- addtopageattributes("ArtBox",box) end lpdf.registerpagefinalizer(pagespecification,"page specification") @@ -365,7 +371,7 @@ local map = { -- end -- end -- end --- lpdf.addtocatalog("PageLabels", pdfdictionary { Nums = list }) +-- addtocatalog("PageLabels", pdfdictionary { Nums = list }) -- end local function featurecreep() @@ -416,7 +422,7 @@ local function featurecreep() stopped = false end end - lpdf.addtocatalog("PageLabels", pdfdictionary { Nums = list }) + addtocatalog("PageLabels", pdfdictionary { Nums = list }) end lpdf.registerdocumentfinalizer(featurecreep,"featurecreep") diff --git a/tex/context/base/lpdf-mov.lua b/tex/context/base/lpdf-mov.lua index 41db97e0c..87375e4ce 100644 --- a/tex/context/base/lpdf-mov.lua +++ b/tex/context/base/lpdf-mov.lua @@ -11,10 +11,10 @@ local format = string.format local lpdf = lpdf local nodeinjections = backends.pdf.nodeinjections -local pdfannotation_node = nodes.pool.pdfannotation local 
pdfconstant = lpdf.constant local pdfdictionary = lpdf.dictionary local pdfarray = lpdf.array +local pdfborder = lpdf.border local write_node = node.write function nodeinjections.insertmovie(specification) @@ -31,14 +31,16 @@ function nodeinjections.insertmovie(specification) ShowControls = (specification.controls and true) or false, Mode = (specification["repeat"] and pdfconstant("Repeat")) or nil, } + local bs, bc = pdfborder() local action = pdfdictionary { Subtype = pdfconstant("Movie"), - Border = pdfarray { 0, 0, 0 }, + Border = bs, + C = bc, T = format("movie %s",specification.label), Movie = moviedict, A = controldict, } - write_node(pdfannotation_node(width,height,0,action())) -- test: context(...) + write_node(nodeinjections.annotation(width,height,0,action())) -- test: context(...) end function nodeinjections.insertsound(specification) @@ -51,13 +53,15 @@ function nodeinjections.insertsound(specification) local sounddict = pdfdictionary { F = soundclip.filename } + local bs, bc = pdfborder() local action = pdfdictionary { Subtype = pdfconstant("Movie"), - Border = pdfarray { 0, 0, 0 }, + Border = bs, + C = bc, T = format("sound %s",specification.label), Movie = sounddict, A = controldict, } - write_node(pdfannotation_node(0,0,0,action())) -- test: context(...) + write_node(nodeinjections.annotation(0,0,0,action())) -- test: context(...) 
end end diff --git a/tex/context/base/lpdf-nod.lua b/tex/context/base/lpdf-nod.lua index 68d7fca90..6295947d0 100644 --- a/tex/context/base/lpdf-nod.lua +++ b/tex/context/base/lpdf-nod.lua @@ -90,10 +90,10 @@ function nodepool.pdfsetmatrix(rx,sx,sy,ry,tx,ty) -- todo: tx ty if rx == 1 and ry == 1 then setfield(t,"data","1 0 0 1") else - setfield(t,"data",formatters["%0.6f 0 0 %0.6f"](rx,ry)) + setfield(t,"data",formatters["%0.6F 0 0 %0.6F"](rx,ry)) end else - setfield(t,"data",formatters["%0.6f %0.6f %0.6f %0.6f"](rx,sx,sy,ry)) + setfield(t,"data",formatters["%0.6F %0.6F %0.6F %0.6F"](rx,sx,sy,ry)) end end return t @@ -103,24 +103,28 @@ nodeinjections.save = nodepool.pdfsave nodeinjections.restore = nodepool.pdfrestore nodeinjections.transform = nodepool.pdfsetmatrix +-- the next one is implemented differently, using latelua + function nodepool.pdfannotation(w,h,d,data,n) - local t = copy_node(pdfannot) - if w and w ~= 0 then - setfield(t,"width",w) - end - if h and h ~= 0 then - setfield(t,"height",h) - end - if d and d ~= 0 then - setfield(t,"depth",d) - end - if n then - setfield(t,"objnum",n) - end - if data and data ~= "" then - setfield(t,"data",data) - end - return t + report("don't use node based annotations!") + os.exit() +-- local t = copy_node(pdfannot) +-- if w and w ~= 0 then +-- setfield(t,"width",w) +-- end +-- if h and h ~= 0 then +-- setfield(t,"height",h) +-- end +-- if d and d ~= 0 then +-- setfield(t,"depth",d) +-- end +-- if n then +-- setfield(t,"objnum",n) +-- end +-- if data and data ~= "" then +-- setfield(t,"data",data) +-- end +-- return t end -- (!) The next code in pdfdest.w is wrong: @@ -137,41 +141,43 @@ end -- so we need to force a matrix. 
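Several hunks above (in `lpdf-grp.lua`, `lpdf-ini.lua`, `lpdf-mis.lua`, and `lpdf-nod.lua`) switch numeric templates from `%0.6f` to `%0.6F`. Whatever the exact Lua-side mechanics of that specifier, the underlying constraint is that PDF content streams only accept plain fixed-point numbers: exponent notation such as `1e-07` is not valid PDF syntax. A hedged Python sketch of a PDF-safe number formatter (an illustration of the constraint, not ConTeXt's own code):

```python
def pdf_number(value, digits=6):
    """Format a float for a PDF content stream: fixed-point output
    (never exponent notation), with trailing zeros trimmed.
    A sketch under stated assumptions, not ConTeXt's formatter."""
    s = f"{value:.{digits}f}"      # %f-style never yields 1e-07 form
    s = s.rstrip("0").rstrip(".")  # 0.500000 -> 0.5, 1.000000 -> 1
    return s or "0"                # everything stripped means zero
```

A value like `1e-07` thus collapses to a plain `0` at six digits instead of leaking scientific notation into the stream.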
function nodepool.pdfdestination(w,h,d,name,view,n) - local t = copy_node(pdfdest) - local hasdimensions = false - if w and w ~= 0 then - setfield(t,"width",w) - hasdimensions = true - end - if h and h ~= 0 then - setfield(t,"height",h) - hasdimensions = true - end - if d and d ~= 0 then - setfield(t,"depth",d) - hasdimensions = true - end - if n then - setfield(t,"objnum",n) - end - view = views[view] or view or 1 -- fit is default - setfield(t,"dest_id",name) - setfield(t,"dest_type",view) - if hasdimensions and view == 0 then -- xyz - -- see (!) s -> m -> t -> r - -- linked - local s = copy_node(pdfsave) - local m = copy_node(pdfsetmatrix) - local r = copy_node(pdfrestore) - setfield(m,"data","1 0 0 1") - setfield(s,"next",m) - setfield(m,"next",t) - setfield(t,"next",r) - setfield(m,"prev",s) - setfield(t,"prev",m) - setfield(r,"prev",t) - return s -- a list - else - return t - end + report("don't use node based destinations!") + os.exit() +-- local t = copy_node(pdfdest) +-- local hasdimensions = false +-- if w and w ~= 0 then +-- setfield(t,"width",w) +-- hasdimensions = true +-- end +-- if h and h ~= 0 then +-- setfield(t,"height",h) +-- hasdimensions = true +-- end +-- if d and d ~= 0 then +-- setfield(t,"depth",d) +-- hasdimensions = true +-- end +-- if n then +-- setfield(t,"objnum",n) +-- end +-- view = views[view] or view or 1 -- fit is default +-- setfield(t,"dest_id",name) +-- setfield(t,"dest_type",view) +-- if hasdimensions and view == 0 then -- xyz +-- -- see (!) 
s -> m -> t -> r +-- -- linked +-- local s = copy_node(pdfsave) +-- local m = copy_node(pdfsetmatrix) +-- local r = copy_node(pdfrestore) +-- setfield(m,"data","1 0 0 1") +-- setfield(s,"next",m) +-- setfield(m,"next",t) +-- setfield(t,"next",r) +-- setfield(m,"prev",s) +-- setfield(t,"prev",m) +-- setfield(r,"prev",t) +-- return s -- a list +-- else +-- return t +-- end end diff --git a/tex/context/base/lpdf-ren.lua b/tex/context/base/lpdf-ren.lua index 6af65f9de..37b706420 100644 --- a/tex/context/base/lpdf-ren.lua +++ b/tex/context/base/lpdf-ren.lua @@ -15,47 +15,51 @@ local settings_to_array = utilities.parsers.settings_to_array local backends, lpdf, nodes, node = backends, lpdf, nodes, node -local nodeinjections = backends.pdf.nodeinjections -local codeinjections = backends.pdf.codeinjections -local registrations = backends.pdf.registrations -local viewerlayers = attributes.viewerlayers +local nodeinjections = backends.pdf.nodeinjections +local codeinjections = backends.pdf.codeinjections +local registrations = backends.pdf.registrations +local viewerlayers = attributes.viewerlayers -local references = structures.references +local references = structures.references -references.executers = references.executers or { } -local executers = references.executers +references.executers = references.executers or { } +local executers = references.executers -local variables = interfaces.variables +local variables = interfaces.variables -local v_no = variables.no -local v_yes = variables.yes -local v_start = variables.start -local v_stop = variables.stop -local v_reset = variables.reset -local v_auto = variables.auto -local v_random = variables.random +local v_no = variables.no +local v_yes = variables.yes +local v_start = variables.start +local v_stop = variables.stop +local v_reset = variables.reset +local v_auto = variables.auto +local v_random = variables.random -local pdfconstant = lpdf.constant -local pdfdictionary = lpdf.dictionary -local pdfarray = lpdf.array 
-local pdfreference = lpdf.reference -local pdfflushobject = lpdf.flushobject -local pdfreserveobject = lpdf.reserveobject +local pdfconstant = lpdf.constant +local pdfdictionary = lpdf.dictionary +local pdfarray = lpdf.array +local pdfreference = lpdf.reference +local pdfflushobject = lpdf.flushobject +local pdfreserveobject = lpdf.reserveobject -local nodepool = nodes.pool -local register = nodepool.register -local pdfliteral = nodepool.pdfliteral +local addtopageattributes = lpdf.addtopageattributes +local addtopageresources = lpdf.addtopageresources +local addtocatalog = lpdf.addtocatalog -local pdf_ocg = pdfconstant("OCG") -local pdf_ocmd = pdfconstant("OCMD") -local pdf_off = pdfconstant("OFF") -local pdf_on = pdfconstant("ON") -local pdf_toggle = pdfconstant("Toggle") -local pdf_setocgstate = pdfconstant("SetOCGState") +local nodepool = nodes.pool +local register = nodepool.register +local pdfliteral = nodepool.pdfliteral -local copy_node = node.copy +local pdf_ocg = pdfconstant("OCG") +local pdf_ocmd = pdfconstant("OCMD") +local pdf_off = pdfconstant("OFF") +local pdf_on = pdfconstant("ON") +local pdf_toggle = pdfconstant("Toggle") +local pdf_setocgstate = pdfconstant("SetOCGState") -local lpdf_usage = pdfdictionary { Print = pdfdictionary { PrintState = pdf_off } } +local copy_node = node.copy + +local lpdf_usage = pdfdictionary { Print = pdfdictionary { PrintState = pdf_off } } -- We can have references to layers before they are places, for instance from -- hide and vide actions. 
This is why we need to be able to force usage of layers @@ -163,7 +167,7 @@ local function flushtextlayers() BaseState = pdf_on, }, } - lpdf.addtocatalog("OCProperties",d) + addtocatalog("OCProperties",d) textlayers = nil end end @@ -171,7 +175,7 @@ end local function flushpagelayers() -- we can share these if pagelayers then - lpdf.addtopageresources("Properties",pdfreference(pagelayersreference)) -- we could cache this + addtopageresources("Properties",pdfreference(pagelayersreference)) -- we could cache this end end @@ -342,8 +346,8 @@ function codeinjections.setpagetransition(specification) end delay = tonumber(delay) if delay and delay > 0 then - lpdf.addtopageattributes("Dur",delay) + addtopageattributes("Dur",delay) end - lpdf.addtopageattributes("Trans",d) + addtopageattributes("Trans",d) end end diff --git a/tex/context/base/lpdf-swf.lua b/tex/context/base/lpdf-swf.lua index 12c80036f..88cdcc4ec 100644 --- a/tex/context/base/lpdf-swf.lua +++ b/tex/context/base/lpdf-swf.lua @@ -28,8 +28,6 @@ local checkedkey = lpdf.checkedkey local codeinjections = backends.pdf.codeinjections local nodeinjections = backends.pdf.nodeinjections -local pdfannotation_node = nodes.pool.pdfannotation - local trace_swf = false trackers.register("backend.swf", function(v) trace_swf = v end) local report_swf = logs.reporter("backend","swf") @@ -302,5 +300,5 @@ function backends.pdf.nodeinjections.insertswf(spec) -- factor = spec.factor, -- label = spec.label, } - context(pdfannotation_node(spec.width,spec.height,0,annotation())) -- the context wrap is probably also needed elsewhere + context(nodeinjections.annotation(spec.width,spec.height,0,annotation())) -- the context wrap is probably also needed elsewhere end diff --git a/tex/context/base/lpdf-tag.lua b/tex/context/base/lpdf-tag.lua index afddec345..276816e80 100644 --- a/tex/context/base/lpdf-tag.lua +++ b/tex/context/base/lpdf-tag.lua @@ -15,75 +15,78 @@ local trace_tags = false trackers.register("structures.tags", function(v) 
trace local report_tags = logs.reporter("backend","tags") -local backends = backends -local lpdf = lpdf -local nodes = nodes - -local nodeinjections = backends.pdf.nodeinjections -local codeinjections = backends.pdf.codeinjections - -local tasks = nodes.tasks - -local pdfdictionary = lpdf.dictionary -local pdfarray = lpdf.array -local pdfboolean = lpdf.boolean -local pdfconstant = lpdf.constant -local pdfreference = lpdf.reference -local pdfunicode = lpdf.unicode -local pdfstring = lpdf.string -local pdfflushobject = lpdf.flushobject -local pdfreserveobject = lpdf.reserveobject -local pdfpagereference = lpdf.pagereference - -local texgetcount = tex.getcount - -local nodecodes = nodes.nodecodes - -local hlist_code = nodecodes.hlist -local vlist_code = nodecodes.vlist -local glyph_code = nodecodes.glyph - -local a_tagged = attributes.private('tagged') -local a_image = attributes.private('image') - -local nuts = nodes.nuts -local tonut = nuts.tonut -local tonode = nuts.tonode - -local nodepool = nuts.pool -local pdfliteral = nodepool.pdfliteral - -local getid = nuts.getid -local getattr = nuts.getattr -local getprev = nuts.getprev -local getnext = nuts.getnext -local getlist = nuts.getlist -local setfield = nuts.setfield - -local traverse_nodes = nuts.traverse -local tosequence = nuts.tosequence -local copy_node = nuts.copy -local slide_nodelist = nuts.slide -local insert_before = nuts.insert_before -local insert_after = nuts.insert_after - -local structure_stack = { } -local structure_kids = pdfarray() -local structure_ref = pdfreserveobject() -local parent_ref = pdfreserveobject() -local root = { pref = pdfreference(structure_ref), kids = structure_kids } -local tree = { } -local elements = { } -local names = pdfarray() -local taglist = structures.tags.taglist -local usedlabels = structures.tags.labels -local properties = structures.tags.properties -local usedmapping = { } - -local colonsplitter = lpeg.splitat(":") -local dashsplitter = lpeg.splitat("-") - -local 
add_ids = false -- true +local backends = backends +local lpdf = lpdf +local nodes = nodes + +local nodeinjections = backends.pdf.nodeinjections +local codeinjections = backends.pdf.codeinjections + +local tasks = nodes.tasks + +local pdfdictionary = lpdf.dictionary +local pdfarray = lpdf.array +local pdfboolean = lpdf.boolean +local pdfconstant = lpdf.constant +local pdfreference = lpdf.reference +local pdfunicode = lpdf.unicode +local pdfstring = lpdf.string +local pdfflushobject = lpdf.flushobject +local pdfreserveobject = lpdf.reserveobject +local pdfpagereference = lpdf.pagereference + +local addtocatalog = lpdf.addtocatalog +local addtopageattributes = lpdf.addtopageattributes + +local texgetcount = tex.getcount + +local nodecodes = nodes.nodecodes + +local hlist_code = nodecodes.hlist +local vlist_code = nodecodes.vlist +local glyph_code = nodecodes.glyph + +local a_tagged = attributes.private('tagged') +local a_image = attributes.private('image') + +local nuts = nodes.nuts +local tonut = nuts.tonut +local tonode = nuts.tonode + +local nodepool = nuts.pool +local pdfliteral = nodepool.pdfliteral + +local getid = nuts.getid +local getattr = nuts.getattr +local getprev = nuts.getprev +local getnext = nuts.getnext +local getlist = nuts.getlist +local setfield = nuts.setfield + +local traverse_nodes = nuts.traverse +local tosequence = nuts.tosequence +local copy_node = nuts.copy +local slide_nodelist = nuts.slide +local insert_before = nuts.insert_before +local insert_after = nuts.insert_after + +local structure_stack = { } +local structure_kids = pdfarray() +local structure_ref = pdfreserveobject() +local parent_ref = pdfreserveobject() +local root = { pref = pdfreference(structure_ref), kids = structure_kids } +local tree = { } +local elements = { } +local names = pdfarray() +local taglist = structures.tags.taglist +local usedlabels = structures.tags.labels +local properties = structures.tags.properties +local usedmapping = { } + +local colonsplitter = 
lpeg.splitat(":") +local dashsplitter = lpeg.splitat("-") + +local add_ids = false -- true -- function codeinjections.maptag(original,target,kind) -- mapping[original] = { target, kind or "inline" } @@ -124,14 +127,14 @@ local function finishstructure() RoleMap = rolemap, } pdfflushobject(structure_ref,structuretree) - lpdf.addtocatalog("StructTreeRoot",pdfreference(structure_ref)) + addtocatalog("StructTreeRoot",pdfreference(structure_ref)) -- local markinfo = pdfdictionary { Marked = pdfboolean(true), -- UserProperties = pdfboolean(true), -- Suspects = pdfboolean(true), } - lpdf.addtocatalog("MarkInfo",pdfreference(pdfflushobject(markinfo))) + addtocatalog("MarkInfo",pdfreference(pdfflushobject(markinfo))) -- for fulltag, element in next, elements do pdfflushobject(element.knum,element.kids) @@ -156,7 +159,7 @@ end local function finishpage() -- flush what can be flushed - lpdf.addtopageattributes("StructParents",pagenum-1) + addtopageattributes("StructParents",pagenum-1) end -- here we can flush and free elements that are finished diff --git a/tex/context/base/lpdf-u3d.lua b/tex/context/base/lpdf-u3d.lua index 33269486c..f0fca0762 100644 --- a/tex/context/base/lpdf-u3d.lua +++ b/tex/context/base/lpdf-u3d.lua @@ -17,6 +17,7 @@ if not modules then modules = { } end modules ['lpdf-u3d'] = { -- point we will end up with a reimplementation. For instance -- it makes sense to add the same activation code as with swf. 
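Editorial aside: a pattern that recurs throughout this commit (lpdf-ren, lpdf-tag, lpdf-xmp, luat-cnf) is hoisting library functions into locals, e.g. `local addtocatalog = lpdf.addtocatalog`, so that later calls go through a fast upvalue instead of a repeated table lookup. A self-contained illustration of the idiom — the `norm` example is invented for demonstration:

```lua
-- cache the function in a local at load time: one table lookup total
local sqrt = math.sqrt

local function norm(x, y)
  -- sqrt is an upvalue here; writing math.sqrt instead would index
  -- the math table on every call
  return sqrt(x * x + y * y)
end

print(norm(3, 4)) -- prints 5 (formatted as 5.0 under Lua 5.3+)
```

In code that runs per node or per page, as the backend code here does, this micro-optimization is idiomatic Lua and also documents up front which parts of a library a file depends on.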
+local tonumber = tonumber local format, find = string.format, string.find local cos, sin, sqrt, pi, atan2, abs = math.cos, math.sin, math.sqrt, math.pi, math.atan2, math.abs @@ -38,8 +39,6 @@ local pdfflushstreamfileobject = lpdf.flushstreamfileobject local checkedkey = lpdf.checkedkey local limited = lpdf.limited -local pdfannotation_node = nodes.pool.pdfannotation - local schemes = table.tohash { "Artwork", "None", "White", "Day", "Night", "Hard", "Primary", "Blue", "Red", "Cube", "CAD", "Headlamp", @@ -462,7 +461,7 @@ local function insert3d(spec) -- width, height, factor, display, controls, label }, ProcSet = pdfarray { pdfconstant("PDF"), pdfconstant("ImageC") }, } - local pwd = pdfflushstreamobject(format("q /GS gs %f 0 0 %f 0 0 cm /IM Do Q",factor*width,factor*height),pw) + local pwd = pdfflushstreamobject(format("q /GS gs %F 0 0 %F 0 0 cm /IM Do Q",factor*width,factor*height),pw) annot.AP = pdfdictionary { N = pdfreference(pwd) } @@ -484,5 +483,5 @@ function nodeinjections.insertu3d(spec) controls = spec.controls, label = spec.label, } - node.write(pdfannotation_node(spec.width,spec.height,0,annotation())) + node.write(nodeinjections.annotation(spec.width,spec.height,0,annotation())) end diff --git a/tex/context/base/lpdf-wid.lua b/tex/context/base/lpdf-wid.lua index 11ac82a08..895bbd3ff 100644 --- a/tex/context/base/lpdf-wid.lua +++ b/tex/context/base/lpdf-wid.lua @@ -46,20 +46,18 @@ local pdfcolorspec = lpdf.colorspec local pdfflushobject = lpdf.flushobject local pdfflushstreamobject = lpdf.flushstreamobject local pdfflushstreamfileobject = lpdf.flushstreamfileobject -local pdfreserveannotation = lpdf.reserveannotation local pdfreserveobject = lpdf.reserveobject local pdfpagereference = lpdf.pagereference local pdfshareobjectreference = lpdf.shareobjectreference +local pdfaction = lpdf.action +local pdfborder = lpdf.border -local nodepool = nodes.pool - -local pdfannotation_node = nodepool.pdfannotation +local pdftransparencyvalue = 
lpdf.transparencyvalue +local pdfcolorvalues = lpdf.colorvalues local hpack_node = node.hpack local write_node = node.write -- test context(...) instead -local pdf_border = pdfarray { 0, 0, 0 } -- can be shared - -- symbols local presets = { } -- xforms @@ -117,8 +115,8 @@ codeinjections.presetsymbollist = presetsymbollist -- } local attachment_symbols = { - Graph = pdfconstant("GraphPushPin"), - Paperclip = pdfconstant("PaperclipTag"), + Graph = pdfconstant("Graph"), + Paperclip = pdfconstant("Paperclip"), Pushpin = pdfconstant("PushPin"), } @@ -170,12 +168,12 @@ end local function analyzecolor(colorvalue,colormodel) local cvalue = colorvalue and tonumber(colorvalue) local cmodel = colormodel and tonumber(colormodel) or 3 - return cvalue and pdfarray { lpdf.colorvalues(cmodel,cvalue) } or nil + return cvalue and pdfarray { pdfcolorvalues(cmodel,cvalue) } or nil end local function analyzetransparency(transparencyvalue) local tvalue = transparencyvalue and tonumber(transparencyvalue) - return tvalue and lpdf.transparencyvalue(tvalue) or nil + return tvalue and pdftransparencyvalue(tvalue) or nil end -- Attachments @@ -342,7 +340,7 @@ function nodeinjections.attachfile(specification) OC = analyzelayer(specification.layer), } local width, height, depth = specification.width or 0, specification.height or 0, specification.depth - local box = hpack_node(pdfannotation_node(width,height,depth,d())) + local box = hpack_node(nodeinjections.annotation(width,height,depth,d())) box.width, box.height, box.depth = width, height, depth return box end @@ -427,19 +425,19 @@ function nodeinjections.comment(specification) -- brrr: seems to be done twice local box if usepopupcomments then -- rather useless as we can hide/vide - local nd = pdfreserveannotation() - local nc = pdfreserveannotation() + local nd = pdfreserveobject() + local nc = pdfreserveobject() local c = pdfdictionary { Subtype = pdfconstant("Popup"), Parent = pdfreference(nd), } d.Popup = pdfreference(nc) box = 
hpack_node( - pdfannotation_node(0,0,0,d(),nd), - pdfannotation_node(width,height,depth,c(),nc) + nodeinjections.annotation(0,0,0,d(),nd), + nodeinjections.annotation(width,height,depth,c(),nc) ) else - box = hpack_node(pdfannotation_node(width,height,depth,d())) + box = hpack_node(nodeinjections.annotation(width,height,depth,d())) end box.width, box.height, box.depth = width, height, depth -- redundant return box @@ -484,7 +482,7 @@ end local ms, mu, mf = { }, { }, { } local function delayed(label) - local a = pdfreserveannotation() + local a = pdfreserveobject() mu[label] = a return pdfreference(a) end @@ -504,23 +502,25 @@ local function insertrenderingwindow(specification) local actions = nil if openpage or closepage then actions = pdfdictionary { - PO = (openpage and lpdf.action(openpage )) or nil, - PC = (closepage and lpdf.action(closepage)) or nil, + PO = (openpage and lpdfaction(openpage )) or nil, + PC = (closepage and lpdfaction(closepage)) or nil, } end local page = tonumber(specification.page) or texgetcount("realpageno") -- todo - local r = mu[label] or pdfreserveannotation() -- why the reserve here? + local r = mu[label] or pdfreserveobject() -- why the reserve here? local a = pdfdictionary { S = pdfconstant("Rendition"), R = mf[label], OP = 0, AN = pdfreference(r), } + local bs, bc = pdfborder() local d = pdfdictionary { Subtype = pdfconstant("Screen"), P = pdfreference(pdfpagereference(page)), A = a, -- needed in order to make the annotation clickable (i.e. 
don't bark) - Border = pdf_border, + Border = bs, + C = bc, AA = actions, } local width = specification.width or 0 @@ -528,7 +528,7 @@ local function insertrenderingwindow(specification) if height == 0 or width == 0 then -- todo: sound needs no window end - write_node(pdfannotation_node(width,height,0,d(),r)) -- save ref + write_node(nodeinjections.annotation(width,height,0,d(),r)) -- save ref return pdfreference(r) end @@ -539,7 +539,7 @@ local function insertrendering(specification) local option = settings_to_hash(specification.option) if not mf[label] then local filename = specification.filename - local isurl = find(filename,"://") + local isurl = find(filename,"://",1,true) -- local start = pdfdictionary { -- Type = pdfconstant("MediaOffset"), -- S = pdfconstant("T"), -- time diff --git a/tex/context/base/lpdf-xmp.lua b/tex/context/base/lpdf-xmp.lua index 061ed0757..c8b2d236c 100644 --- a/tex/context/base/lpdf-xmp.lua +++ b/tex/context/base/lpdf-xmp.lua @@ -7,6 +7,7 @@ if not modules then modules = { } end modules ['lpdf-xmp'] = { comment = "with help from Peter Rolf", } +local tostring = tostring local format, random, char, gsub, concat = string.format, math.random, string.char, string.gsub, table.concat local xmlfillin = xml.fillin @@ -119,16 +120,16 @@ end -- redefined -local addtoinfo = lpdf.addtoinfo -local addxmpinfo = lpdf.addxmpinfo +local pdfaddtoinfo = lpdf.addtoinfo +local pdfaddxmpinfo = lpdf.addxmpinfo function lpdf.addtoinfo(tag,pdfvalue,strvalue) - addtoinfo(tag,pdfvalue) + pdfaddtoinfo(tag,pdfvalue) local value = strvalue or gsub(tostring(pdfvalue),"^%((.*)%)$","%1") -- hack if trace_info then report_info("set %a to %a",tag,value) end - addxmpinfo(tag,value) + pdfaddxmpinfo(tag,value) end -- for the do-it-yourselvers @@ -159,20 +160,20 @@ local function flushxmpinfo() local fullbanner = tex.pdftexbanner -- local fullbanner = gsub(tex.pdftexbanner,"kpse.*","") - addxmpinfo("DocumentID", documentid) - addxmpinfo("InstanceID", instanceid) - 
addxmpinfo("Producer", producer) - addxmpinfo("CreatorTool", creator) - addxmpinfo("CreateDate", time) - addxmpinfo("ModifyDate", time) - addxmpinfo("MetadataDate", time) - addxmpinfo("PTEX.Fullbanner", fullbanner) - - addtoinfo("Producer", producer) - addtoinfo("Creator", creator) - addtoinfo("CreationDate", time) - addtoinfo("ModDate", time) --- addtoinfo("PTEX.Fullbanner", fullbanner) -- no checking done on existence + pdfaddxmpinfo("DocumentID", documentid) + pdfaddxmpinfo("InstanceID", instanceid) + pdfaddxmpinfo("Producer", producer) + pdfaddxmpinfo("CreatorTool", creator) + pdfaddxmpinfo("CreateDate", time) + pdfaddxmpinfo("ModifyDate", time) + pdfaddxmpinfo("MetadataDate", time) + pdfaddxmpinfo("PTEX.Fullbanner", fullbanner) + + pdfaddtoinfo("Producer", producer) + pdfaddtoinfo("Creator", creator) + pdfaddtoinfo("CreationDate", time) + pdfaddtoinfo("ModDate", time) +-- pdfaddtoinfo("PTEX.Fullbanner", fullbanner) -- no checking done on existence local blob = xml.tostring(xml.first(xmp or valid_xmp(),"/x:xmpmeta")) local md = pdfdictionary { diff --git a/tex/context/base/luat-cbk.lua b/tex/context/base/luat-cbk.lua index 4f044f9ac..8c224ad2c 100644 --- a/tex/context/base/luat-cbk.lua +++ b/tex/context/base/luat-cbk.lua @@ -118,7 +118,7 @@ end function callbacks.freeze(name,freeze) freeze = type(freeze) == "string" and freeze - if find(name,"%*") then + if find(name,"*",1,true) then local pattern = name for name, _ in next, list do if find(name,pattern) then diff --git a/tex/context/base/luat-cnf.lua b/tex/context/base/luat-cnf.lua index 4ad6cd69d..fba2b71d1 100644 --- a/tex/context/base/luat-cnf.lua +++ b/tex/context/base/luat-cnf.lua @@ -134,13 +134,14 @@ function texconfig.init() -- shortcut and helper + local bytecode = lua.bytecode + local function init(start) - local b = lua.bytecode local i = start local t = os.clock() - while b[i] do - b[i]() ; - b[i] = nil ; + while bytecode[i] do + bytecode[i]() ; + bytecode[i] = nil ; i = i + 1 -- 
collectgarbage('step') end @@ -159,6 +160,8 @@ function texconfig.init() end end + texconfig.init = function() end + end -- we provide a qualified path diff --git a/tex/context/base/luat-cod.lua b/tex/context/base/luat-cod.lua index 8b015477f..c16a3b110 100644 --- a/tex/context/base/luat-cod.lua +++ b/tex/context/base/luat-cod.lua @@ -51,6 +51,9 @@ function lua.registercode(filename,version) bytecode[n] = code lua.lastbytecode = n end + elseif environment.initex then + texio.write_nl("\nerror loading file: " .. filename .. " (aborting)") + os.exit() end end end @@ -85,7 +88,7 @@ local environment = environment -- no string.unquoted yet local sourcefile = gsub(arg and arg[1] or "","^\"(.*)\"$","%1") -local sourcepath = find(sourcefile,"/") and gsub(sourcefile,"/[^/]+$","") or "" +local sourcepath = find(sourcefile,"/",1,true) and gsub(sourcefile,"/[^/]+$","") or "" local targetpath = "." -- delayed (via metatable): diff --git a/tex/context/base/luat-env.lua b/tex/context/base/luat-env.lua index 5558e0303..5f2a0d281 100644 --- a/tex/context/base/luat-env.lua +++ b/tex/context/base/luat-env.lua @@ -102,14 +102,20 @@ function environment.luafilechunk(filename,silent) -- used for loading lua bytec local fullname = environment.luafile(filename) if fullname and fullname ~= "" then local data = luautilities.loadedluacode(fullname,strippable,filename) -- can be overloaded - if trace_locating then +-- if trace_locating then +-- report_lua("loading file %a %s",fullname,not data and "failed" or "succeeded") +-- elseif not silent then +-- texio.write("<",data and "+ " or "- ",fullname,">") +-- end + if not silent then report_lua("loading file %a %s",fullname,not data and "failed" or "succeeded") - elseif not silent then - texio.write("<",data and "+ " or "- ",fullname,">") end return data else - if trace_locating then +-- if trace_locating then +-- report_lua("unknown file %a",filename) +-- end + if not silent then report_lua("unknown file %a",filename) end return nil diff 
--git a/tex/context/base/luat-ini.lua b/tex/context/base/luat-ini.lua index 587214b93..9303b614a 100644 --- a/tex/context/base/luat-ini.lua +++ b/tex/context/base/luat-ini.lua @@ -72,6 +72,8 @@ lua.messages = lua.messages or { } local messages = lua.messages storage.register("lua/numbers", numbers, "lua.numbers" ) storage.register("lua/messages", messages, "lua.messages") +local f_message = string.formatters["=[instance: %s]"] -- the = controls the lua error / see: lobject.c + local setfenv = setfenv or debug.setfenv -- < 5.2 if setfenv then @@ -183,7 +185,7 @@ elseif libraries then -- assume >= 5.2 messages[lnn] = message numbers[name] = lnn end - luanames[lnn] = message + luanames[lnn] = f_message(message) context(lnn) end @@ -198,7 +200,7 @@ else messages[lnn] = message numbers[name] = lnn end - luanames[lnn] = message + luanames[lnn] = f_message(message) context(lnn) end diff --git a/tex/context/base/luat-ini.mkiv b/tex/context/base/luat-ini.mkiv index a3a590311..7823ebd5a 100644 --- a/tex/context/base/luat-ini.mkiv +++ b/tex/context/base/luat-ini.mkiv @@ -120,23 +120,31 @@ \obeyluatokens \csname\??luacode#1\endcsname} +% \unexpanded\def\definenamedlua[#1]#2[#3]% no optional arg handling here yet / we could use numbers instead (more efficient) +% {\ifcsname\??luacode#1\endcsname \else +% \scratchcounter\ctxlua{lua.registername("#1","#3")}% +% \normalexpanded{\xdef\csname\??luacode#1\endcsname##1\csname\e!stop#1\v!code\endcsname}% +% {\noexpand\normalexpanded{\endgroup\noexpand\directlua\the\scratchcounter{_G=protect("#1\s!data")##1}}}% +% \expandafter\edef\csname\e!start#1\v!code\endcsname {\luat_start_named_lua_code{#1}}% +% \expandafter\edef\csname #1\v!code\endcsname##1{\noexpand\directlua\the\scratchcounter{_G=protect("#1\s!data")##1}}% +% \fi} + \unexpanded\def\definenamedlua[#1]#2[#3]% no optional arg handling here yet / we could use numbers instead (more efficient) {\ifcsname\??luacode#1\endcsname \else - 
\scratchcounter\ctxlua{lua.registername("#1","#3")}% + \expandafter\chardef\csname\??luacode:#1\endcsname\ctxlua{lua.registername("#1","#3")}% \normalexpanded{\xdef\csname\??luacode#1\endcsname##1\csname\e!stop#1\v!code\endcsname}% - %{\endgroup\noexpand\directlua\the\scratchcounter{local _ENV=protect("#1\s!data")##1}}% - {\noexpand\normalexpanded{\endgroup\noexpand\directlua\the\scratchcounter{local _ENV=protect("#1\s!data")##1}}}% + {\noexpand\normalexpanded{\endgroup\noexpand\directlua\csname\??luacode:#1\endcsname{_G=protect("#1\s!data")##1}}}% \expandafter\edef\csname\e!start#1\v!code\endcsname {\luat_start_named_lua_code{#1}}% - \expandafter\edef\csname #1\v!code\endcsname##1{\noexpand\directlua\the\scratchcounter{local _ENV=protect("#1\s!data")##1}}% + \expandafter\edef\csname #1\v!code\endcsname##1{\noexpand\directlua\csname\??luacode:#1\endcsname{_G=protect("#1\s!data")##1}}% \fi} %D We predefine a few. % \definenamedlua[module][module instance] % not needed -\definenamedlua[user] [private user instance] -\definenamedlua[third] [third party module instance] -\definenamedlua[isolated][isolated instance] +\definenamedlua[user] [private user] +\definenamedlua[third] [third party module] +\definenamedlua[isolated][isolated] %D In practice this works out as follows: %D @@ -266,4 +274,53 @@ \def\luat_lua_code {\normalexpanded{\endgroup\noexpand\directlua\expandafter{\the\scratchtoks}}} % \zerocount is default +% \startctxfunction MyFunctionA +% context(" A1 ") +% \stopctxfunction +% +% \startctxfunctiondefinition MyFunctionB +% context(" B2 ") +% \stopctxfunctiondefinition +% +% \starttext +% \dorecurse{10000}{\ctxfunction{MyFunctionA}} \page +% \dorecurse{10000}{\MyFunctionB} \page +% \dorecurse{10000}{\ctxlua{context(" C3 ")}} \page +% \stoptext + +\installcorenamespace{ctxfunction} + +\normalprotected\def\startctxfunctiondefinition #1 % + {\begingroup \obeylualines \obeyluatokens \luat_start_lua_function_definition_indeed{#1}} + +% 
\def\luat_start_lua_function_definition_indeed#1#2\stopctxfunctiondefinition +% {\endgroup\expandafter\edef\csname#1\endcsname{\noexpand\luafunction\ctxcommand{ctxfunction(\!!bs#2\!!es)}\relax}} + +\installcorenamespace{luafunction} + +\def\luat_start_lua_function_definition_indeed#1#2\stopctxfunctiondefinition + {\endgroup + \expandafter\chardef\csname\??luafunction#1\endcsname\ctxcommand{ctxfunction(\!!bs#2\!!es)}\relax + \expandafter\edef\csname#1\endcsname{\noexpand\luafunction\csname\??luafunction#1\endcsname}} + +% \unexpanded\def\setctxluafunction#1#2% experiment +% {\expandafter\chardef\csname#1\endcsname#2\relax} + +\unexpanded\def\setctxluafunction#1#2% experiment + {\expandafter\chardef\csname\??luafunction#1\endcsname#2\relax + \expandafter\edef\csname#1\endcsname{\noexpand\luafunction\csname\??luafunction#1\endcsname}} + +\let\stopctxfunctiondefinition\relax + +\normalprotected\def\startctxfunction #1 % + {\begingroup \obeylualines \obeyluatokens \luat_start_lua_function_indeed{#1}} + +\def\luat_start_lua_function_indeed#1#2\stopctxfunction + {\endgroup\expandafter\edef\csname\??ctxfunction#1\endcsname{\noexpand\luafunction\ctxcommand{ctxfunction(\!!bs#2\!!es)}\relax}} + +\let\stopctxfunction\relax + +\def\ctxfunction#1% + {\csname\??ctxfunction#1\endcsname} + \protect \endinput diff --git a/tex/context/base/luat-run.lua b/tex/context/base/luat-run.lua index 719a6f7c9..607c3528a 100644 --- a/tex/context/base/luat-run.lua +++ b/tex/context/base/luat-run.lua @@ -6,8 +6,8 @@ if not modules then modules = { } end modules ['luat-run'] = { license = "see context related readme files" } -local format = string.format -local insert = table.insert +local format, find = string.format, string.find +local insert, remove = table.insert, table.remove -- trace_job_status is also controlled by statistics.enable that is set via the directive system.nostatistics @@ -158,3 +158,75 @@ statistics.register("synctex tracing",function() return "synctex has been enabled (extra 
log file generated)" end end) + +-- filenames + +local types = { + "data", + "font map", + "image", + "font subset", + "full font", +} + +local report_open = logs.reporter("open source") +local report_close = logs.reporter("close source") +local report_load = logs.reporter("load resource") + +local register = callbacks.register + +local level = 0 +local total = 0 +local stack = { } +local all = false + +local function report_start(left,name) + if not left then + -- skip + elseif left ~= 1 then + if all then + report_load("%s > %s",types[left],name or "?") + end + elseif find(name,"virtual://") then + insert(stack,false) + else + insert(stack,name) + total = total + 1 + level = level + 1 + report_open("%i > %i > %s",level,total,name or "?") + end +end + +local function report_stop(right) + if level == 1 or not right or right == 1 then + local name = remove(stack) + if name then + report_close("%i > %i > %s",level,total,name or "?") + level = level - 1 + end + end +end + +local function report_none() +end + +register("start_file",report_start) +register("stop_file", report_stop) + +directives.register("system.reportfiles", function(v) + if v == "noresources" then + all = false + register("start_file",report_start) + register("stop_file", report_stop) + elseif toboolean(v) or v == "all" then + all = true + register("start_file",report_start) + register("stop_file", report_stop) + elseif v == "traditional" then + register("start_file",nil) + register("stop_file", nil) + else + register("start_file",report_none) + register("stop_file", report_none) + end +end) diff --git a/tex/context/base/luat-sto.lua b/tex/context/base/luat-sto.lua index 041050fb8..b04d655c2 100644 --- a/tex/context/base/luat-sto.lua +++ b/tex/context/base/luat-sto.lua @@ -13,6 +13,7 @@ local gmatch, format = string.gmatch, string.format local serialize, concat, sortedhash = table.serialize, table.concat, table.sortedhash local bytecode = lua.bytecode local strippedloadstring = 
utilities.lua.strippedloadstring +local formatters = string.formatters local trace_storage = false local report_storage = logs.reporter("system","storage") @@ -48,38 +49,71 @@ function storage.register(...) return t end -local n = 0 -local function dump() - local max = storage.max - for i=1,#data do - local d = data[i] - local message, original, target = d[1], d[2] ,d[3] - local c, code, name = 0, { }, nil - -- we have a nice definer for this - for str in gmatch(target,"([^%.]+)") do - if name then - name = name .. "." .. str +local n = 0 -- is that one used ? + +if environment.initex then + + -- local function dump() + -- local max = storage.max + -- for i=1,#data do + -- local d = data[i] + -- local message, original, target = d[1], d[2] ,d[3] + -- local c, code, name = 0, { }, nil + -- -- we have a nice definer for this + -- for str in gmatch(target,"([^%.]+)") do + -- if name then + -- name = name .. "." .. str + -- else + -- name = str + -- end + -- c = c + 1 ; code[c] = formatters["%s = %s or { }"](name,name) + -- end + -- max = max + 1 + -- if trace_storage then + -- c = c + 1 ; code[c] = formatters["print('restoring %s from slot %s')"](message,max) + -- end + -- c = c + 1 ; code[c] = serialize(original,name) + -- if trace_storage then + -- report_storage('saving %a in slot %a, size %s',message,max,#code[c]) + -- end + -- -- we don't need tracing in such tables + -- bytecode[max] = strippedloadstring(concat(code,"\n"),storage.strip,format("slot %s (%s)",max,name)) + -- collectgarbage("step") + -- end + -- storage.max = max + -- end + + local function dump() + local max = storage.max + local strip = storage.strip + for i=1,#data do + max = max + 1 + local tabledata = data[i] + local message = tabledata[1] + local original = tabledata[2] + local target = tabledata[3] + local definition = utilities.tables.definetable(target,false,true) + local comment = formatters["restoring %s from slot %s"](message,max) + if trace_storage then + comment = 
formatters["print('%s')"](comment) else - name = str + comment = formatters["-- %s"](comment) end - c = c + 1 ; code[c] = format("%s = %s or { }",name,name) - end - max = max + 1 - if trace_storage then - c = c + 1 ; code[c] = format("print('restoring %s from slot %s')",message,max) - end - c = c + 1 ; code[c] = serialize(original,name) - if trace_storage then - report_storage('saving %a in slot %a, size %s',message,max,#code[c]) + local dumped = serialize(original,target) + if trace_storage then + report_storage('saving %a in slot %a, size %s',message,max,#dumped) + end + -- we don't need tracing in such tables + dumped = concat({ definition, comment, dumped },"\n") + bytecode[max] = strippedloadstring(dumped,strip,formatters["slot %s (%s)"](max,target)) + collectgarbage("step") end - -- we don't need tracing in such tables - bytecode[max] = strippedloadstring(concat(code,"\n"),storage.strip,format("slot %s (%s)",max,name)) - collectgarbage("step") + storage.max = max end - storage.max = max -end -lua.registerfinalizer(dump,"dump storage") + lua.registerfinalizer(dump,"dump storage") + +end -- to be tested with otf caching: @@ -115,31 +149,14 @@ statistics.register("stored bytecode data", function() local tofmodules = storage.tofmodules or 0 local tofdumps = storage.toftables or 0 if environment.initex then - local luautilities = utilities.lua - local nofstrippedbytes = luautilities.nofstrippedbytes - local nofstrippedchunks = luautilities.nofstrippedchunks - if nofstrippedbytes > 0 then - return format("%s modules, %s tables, %s chunks, %s chunks stripped (%s bytes)", - nofmodules, - nofdumps, - nofmodules + nofdumps, - nofstrippedchunks, - nofstrippedbytes - ) - elseif nofstrippedchunks > 0 then - return format("%s modules, %s tables, %s chunks, %s chunks stripped", - nofmodules, - nofdumps, - nofmodules + nofdumps, - nofstrippedchunks - ) - else - return format("%s modules, %s tables, %s chunks", - nofmodules, - nofdumps, - nofmodules + nofdumps - ) - end + local
luautilities = utilities.lua + return format("%s modules, %s tables, %s chunks, %s chunks stripped (%s bytes)", + nofmodules, + nofdumps, + nofmodules + nofdumps, + luautilities.nofstrippedchunks or 0, + luautilities.nofstrippedbytes or 0 + ) else return format("%s modules (%0.3f sec), %s tables (%0.3f sec), %s chunks (%0.3f sec)", nofmodules, tofmodules, diff --git a/tex/context/base/lxml-ini.mkiv b/tex/context/base/lxml-ini.mkiv index cfa0114d0..239fe4ac0 100644 --- a/tex/context/base/lxml-ini.mkiv +++ b/tex/context/base/lxml-ini.mkiv @@ -58,6 +58,7 @@ \def\xmldirect #1{\ctxlxml{direct("#1")}} % in loops, not dt but root \def\xmlidx #1#2#3{\ctxlxml{idx("#1","#2",\number#3)}} \def\xmlinclude #1#2#3{\ctxlxml{include("#1","#2","#3",true)}} +\def\xmlsave #1#2{\ctxlxml{save("#1","#2")}} \def\xmlindex #1#2#3{\ctxlxml{index("#1","#2",\number#3)}} \def\xmlinfo #1{\hbox{\ttxx[\ctxlxml{info("#1")}]}} \def\xmlshow #1{\startpacked\ttx\xmlverbatim{#1}\stoppacked} diff --git a/tex/context/base/lxml-lpt.lua b/tex/context/base/lxml-lpt.lua index 51ab321b9..8567f2623 100644 --- a/tex/context/base/lxml-lpt.lua +++ b/tex/context/base/lxml-lpt.lua @@ -1039,37 +1039,6 @@ local function normal_apply(list,parsed,nofparsed,order) return collected end ---~ local function applylpath(list,pattern) ---~ -- we avoid an extra call ---~ local parsed = cache[pattern] ---~ if parsed then ---~ lpathcalls = lpathcalls + 1 ---~ lpathcached = lpathcached + 1 ---~ elseif type(pattern) == "table" then ---~ lpathcalls = lpathcalls + 1 ---~ parsed = pattern ---~ else ---~ parsed = lpath(pattern) or pattern ---~ end ---~ if not parsed then ---~ return ---~ end ---~ local nofparsed = #parsed ---~ if nofparsed == 0 then ---~ return -- something is wrong ---~ end ---~ local one = list[1] -- we could have a third argument: isroot and list or list[1] or whatever we like ... 
todo ---~ if not one then ---~ return -- something is wrong ---~ elseif not trace_lpath then ---~ return normal_apply(list,parsed,nofparsed,one.mi) ---~ elseif trace_lprofile then ---~ return profiled_apply(list,parsed,nofparsed,one.mi) ---~ else ---~ return traced_apply(list,parsed,nofparsed,one.mi) ---~ end ---~ end - local function applylpath(list,pattern) if not list then return @@ -1384,8 +1353,13 @@ function xml.elements(root,pattern,reverse) -- r, d, k local collected = applylpath(root,pattern) if not collected then return dummy - elseif reverse then - local c = #collected + 1 + end + local n = #collected + if n == 0 then + return dummy + end + if reverse then + local c = n + 1 return function() if c > 1 then c = c - 1 @@ -1395,7 +1369,7 @@ function xml.elements(root,pattern,reverse) -- r, d, k end end else - local n, c = #collected, 0 + local c = 0 return function() if c < n then c = c + 1 @@ -1411,8 +1385,13 @@ function xml.collected(root,pattern,reverse) -- e local collected = applylpath(root,pattern) if not collected then return dummy - elseif reverse then - local c = #collected + 1 + end + local n = #collected + if n == 0 then + return dummy + end + if reverse then + local c = n + 1 return function() if c > 1 then c = c - 1 @@ -1420,7 +1399,7 @@ function xml.collected(root,pattern,reverse) -- e end end else - local n, c = #collected, 0 + local c = 0 return function() if c < n then c = c + 1 @@ -1441,7 +1420,7 @@ end -- texy (see xfdf): -local function split(e) +local function split(e) -- todo: use helpers / lpeg local dt = e.dt if dt then for i=1,#dt do diff --git a/tex/context/base/lxml-tex.lua b/tex/context/base/lxml-tex.lua index 2cbdfc886..0503c511c 100644 --- a/tex/context/base/lxml-tex.lua +++ b/tex/context/base/lxml-tex.lua @@ -36,6 +36,7 @@ local xmlwithelements = xml.withelements local xmlserialize, xmlcollect, xmltext, xmltostring = xml.serialize, xml.collect, xml.text, xml.tostring local xmlapplylpath = xml.applylpath local xmlunprivatized, 
xmlprivatetoken, xmlprivatecodes = xml.unprivatized, xml.privatetoken, xml.privatecodes +local xmlstripelement = xml.stripelement local variables = (interfaces and interfaces.variables) or { } @@ -457,6 +458,10 @@ function lxml.include(id,pattern,attribute,recurse) stoptiming(xml) end +function lxml.save(id,name) + xml.save(getid(id),name) +end + function xml.getbuffer(name,compress,entities) -- we need to make sure that commands are processed if not name or name == "" then name = tex.jobname @@ -915,16 +920,18 @@ function lxml.setsetup(id,pattern,setup) end end end + elseif setup == "-" then + for c=1,nc do + collected[c].command = false + end + elseif setup == "+" then + for c=1,nc do + collected[c].command = true + end else for c=1,nc do local e = collected[c] - if setup == "-" then - e.command = false - elseif setup == "+" then - e.command = true - else - e.command = e.tg - end + e.command = e.tg end end elseif trace_setups then @@ -967,16 +974,18 @@ function lxml.setsetup(id,pattern,setup) end end end + elseif b == "-" then + for c=1,nc do + collected[c].command = false + end + elseif b == "+" then + for c=1,nc do + collected[c].command = true + end else for c=1,nc do local e = collected[c] - if b == "-" then - e.command = false - elseif b == "+" then - e.command = true - else - e.command = a .. e.tg - end + e.command = a .. 
e.tg end end elseif trace_setups then @@ -1186,7 +1195,7 @@ local function stripped(collected) -- tricky as we strip in place local nc = #collected if nc > 0 then for c=1,nc do - cprint(xml.stripelement(collected[c])) + cprint(xmlstripelement(collected[c])) end end end @@ -1311,10 +1320,11 @@ function texfinalizers.name(collected,n) c = collected[nc-n+1] end if c then - if c.ns == "" then + local ns = c.ns + if not ns or ns == "" then contextsprint(ctxcatcodes,c.tg) else - contextsprint(ctxcatcodes,c.ns,":",c.tg) + contextsprint(ctxcatcodes,ns,":",c.tg) end end end @@ -1327,11 +1337,11 @@ function texfinalizers.tags(collected,nonamespace) if nc > 0 then for c=1,nc do local e = collected[c] - local ns, tg = e.ns, e.tg - if nonamespace or ns == "" then - contextsprint(ctxcatcodes,tg) + local ns = e.ns + if nonamespace or (not ns or ns == "") then + contextsprint(ctxcatcodes,e.tg) else - contextsprint(ctxcatcodes,ns,":",tg) + contextsprint(ctxcatcodes,ns,":",e.tg) end end end @@ -1341,11 +1351,10 @@ end -- local function verbatim(id,before,after) - local root = getid(id) - if root then - if before then contextsprint(ctxcatcodes,before,"[",root.tg or "?","]") end - lxml.toverbatim(xmltostring(root.dt)) ---~ lxml.toverbatim(xml.totext(root.dt)) + local e = getid(id) + if e then + if before then contextsprint(ctxcatcodes,before,"[",e.tg or "?","]") end + lxml.toverbatim(xmltostring(e.dt)) -- lxml.toverbatim(xml.totext(e.dt)) if after then contextsprint(ctxcatcodes,after) end end end @@ -1451,66 +1460,112 @@ end lxml.index = lxml.position function lxml.pos(id) - local root = getid(id) - contextsprint(ctxcatcodes,(root and root.ni) or 0) -end + local e = getid(id) + contextsprint(ctxcatcodes,e and e.ni or 0) +end + +-- function lxml.att(id,a,default) +-- local root = getid(id) +-- if root then +-- local at = root.at +-- local str = (at and at[a]) or default +-- if str and str ~= "" then +-- contextsprint(notcatcodes,str) +-- end +-- elseif default then +-- 
contextsprint(notcatcodes,default) +-- end +-- end +-- +-- no need for an assignment so: function lxml.att(id,a,default) - local root = getid(id) - if root then - local at = root.at - local str = (at and at[a]) or default - if str and str ~= "" then - contextsprint(notcatcodes,str) + local e = getid(id) + if e then + local at = e.at + if at then + -- normally always true + local str = at[a] + if not str then + if default and default ~= "" then + contextsprint(notcatcodes,default) + end + elseif str ~= "" then + contextsprint(notcatcodes,str) + end + elseif default and default ~= "" then + contextsprint(notcatcodes,default) end - elseif default then + elseif default and default ~= "" then contextsprint(notcatcodes,default) end end function lxml.name(id) -- or remapped name? -> lxml.info, combine - local r = getid(id) - local ns = r.rn or r.ns or "" - if ns ~= "" then - contextsprint(ctxcatcodes,ns,":",r.tg) - else - contextsprint(ctxcatcodes,r.tg) + local e = getid(id) + if e then + local ns = e.rn or e.ns + if ns and ns ~= "" then + contextsprint(ctxcatcodes,ns,":",e.tg) + else + contextsprint(ctxcatcodes,e.tg) + end end end function lxml.match(id) -- or remapped name? -> lxml.info, combine - contextsprint(ctxcatcodes,getid(id).mi or 0) + local e = getid(id) + contextsprint(ctxcatcodes,e and e.mi or 0) end function lxml.tag(id) -- tag vs name -> also in l-xml tag->name - contextsprint(ctxcatcodes,getid(id).tg or "") + local e = getid(id) + if e then + local tg = e.tg + if tg and tg ~= "" then + contextsprint(ctxcatcodes,tg) + end + end end function lxml.namespace(id) -- or remapped name? 
- local root = getid(id) - contextsprint(ctxcatcodes,root.rn or root.ns or "") + local e = getid(id) + if e then + local ns = e.rn or e.ns + if ns and ns ~= "" then + contextsprint(ctxcatcodes,ns) + end + end end function lxml.flush(id) - id = getid(id) - local dt = id and id.dt - if dt then - xmlsprint(dt) + local e = getid(id) + if e then + local dt = e.dt + if dt then + xmlsprint(dt) + end end end function lxml.snippet(id,i) local e = getid(id) if e then - local edt = e.dt - if edt then - xmlsprint(edt[i]) + local dt = e.dt + if dt then + local dti = dt[i] + if dti then + xmlsprint(dti) + end end end end function lxml.direct(id) - xmlsprint(getid(id)) + local e = getid(id) + if e then + xmlsprint(e) + end end function lxml.command(id,pattern,cmd) diff --git a/tex/context/base/m-scite.mkiv b/tex/context/base/m-scite.mkiv new file mode 100644 index 000000000..aed2c2631 --- /dev/null +++ b/tex/context/base/m-scite.mkiv @@ -0,0 +1,269 @@ +%D \module +%D [ file=m-scite, +%D version=2014.04.28, +%D title=\CONTEXT\ Extra Modules, +%D subtitle=\SCITE\ lexers, +%D author=Hans Hagen, +%D date=\currentdate, +%D copyright={PRAGMA ADE \& \CONTEXT\ Development Team}] +%C +%C This module is part of the \CONTEXT\ macro||package and is +%C therefore copyrighted by \PRAGMA. See mreadme.pdf for +%C details. + +% We can simplify the scite lexers, as long as we're able to return the +% lexed result table and provide alexer module with the functions that +% the lexer expects (so I need to decipher the cxx file). +% +% lexer._TOKENSTYLES : table +% lexer._CHILDREN : flag +% lexer._EXTRASTYLES : table +% lexer._GRAMMAR : flag +% +% lexers.load : function +% lexers.lex : function +% +% And some properties that map styles onto scintilla styling. 
I get the +% impression that we end up with something simpler, a hybrid between the +% scite lexing and the current context way, so we get an intermediate +% step, with some penalty for context, but at least I don't have to +% maintain two sets (three sets as we also have a line based series). + +% TODO: as these files are in tds we can locate them and set the lexer root +% to that one. Currently we're in context/documents. + +% This is an experiment: eventually we need to hook it into the verbatim code +% and deal with widow lines and so on. + +\startluacode + +-- todo: merge with collapse +-- todo: prehash whitespaces + +-- todo: hook into the pretty print code +-- todo: a simple catcode regime with only \ { } + +local gsub, sub, find = string.gsub, string.sub, string.find +local concat = table.concat +local formatters = string.formatters +local lpegmatch = lpeg.match +local setmetatableindex = table.setmetatableindex + +local scite = require("util-sci") +buffers.scite = scite + +-- context output: + +local f_def_color = formatters["\\definecolor[slxc%s][h=%s%s%s]%%"] +local f_fore_none = formatters["\\def\\slx%s#1{{\\slxc%s#1}}%%"] +local f_fore_bold = formatters["\\def\\slx%s#1{{\\slxc%s\\bf#1}}%%"] +local f_none_bold = formatters["\\def\\slx%s#1{{\\bf#1}}%%"] +local f_none_none = formatters["\\def\\slx%s#1{{#1}}%%"] +local f_texstyled = formatters["\\slx%s{%s}"] + +local f_mapping = [[ +\let\string\slxL\string\letterleftbrace +\let\string\slxR\string\letterrightbrace +\let\string\slxM\string\letterdollar +\let\string\slxV\string\letterbar +\let\string\slxH\string\letterhash +\let\string\slxB\string\letterbackslash +\let\string\slxP\string\letterpercent +\let\string\slxS\string\fixedspace +%]] + +local replacer = lpeg.replacer { + ["{"] = "\\slxL ", + ["}"] = "\\slxR ", + ["$"] = "\\slxM ", + ["|"] = "\\slxV ", + ["#"] = "\\slxH ", + ["\\"] = "\\slxB ", + ["%"] = "\\slxP ", + [" "] = "\\slxS ", +} + +local colors = nil + +local function exportcolors() +
if not colors then + scite.loadscitelexer() + local function black(f) + return (f[1] == f[2]) and (f[2] == f[3]) and (f[3] == '00') + end + local result, r = { f_mapping }, 1 + for k, v in table.sortedhash(lexer.context.styles) do + local fore = v.fore + if fore and not black(fore) then + r = r + 1 + result[r] = f_def_color(k,fore[1],fore[2],fore[3]) + end + end + r = r + 1 + result[r] = "%" + for k, v in table.sortedhash(lexer.context.styles) do + local bold = v.bold + local fore = v.fore + r = r + 1 + if fore and not black(fore) then + if bold then + result[r] = f_fore_bold(k,k) + else + result[r] = f_fore_none(k,k) + end + else + if bold then + result[r] = f_none_bold(k) + else + result[r] = f_none_none(k) + end + end + end + colors = concat(result,"\n") + end + return colors +end + +local function exportwhites() + return setmetatableindex(function(t,k) + local v = find(k,"white") and true or false + t[k] = v + return v + end) +end + +local function exportstyled(lexer,text) + local result = lexer.lex(lexer,text,0) + local start = 1 + local whites = exportwhites() + local buffer = { } + for i=1,#result,2 do + local style = result[i] + local position = result[i+1] + local txt = sub(text,start,position-1) + txt = lpegmatch(replacer,txt) + if whites[style] then + buffer[#buffer+1] = txt + else + buffer[#buffer+1] = f_texstyled(style,txt) + end + start = position + end + buffer = concat(buffer) + return buffer +end + +function scite.installcommands() + context(exportcolors()) +end + +local function lexdata(data,lexname) + buffers.assign("lex",exportstyled(scite.loadedlexers[lexname],data or "")) +end + +scite.lexdata = lexdata + +function scite.lexbuffer(name,lexname) + lexdata(buffers.getcontent(name) or "",lexname or "tex") +end + +function scite.lexfile(filename,lexname) + lexdata(io.loaddata(filename) or "",lexname or file.suffix(filename)) +end + +-- html output + +\stopluacode + +% This is a preliminary interface. 
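The exportstyled function above is the core of the experiment: lexer.lex returns a flat list of (style, next-start) pairs, and the loop slices the buffer text accordingly, passing whitespace runs through untouched and wrapping every other run in a \slx<style> macro. A minimal standalone sketch of that slicing loop, using a hand-made mock lexer result instead of real scintillua output (the styles and sample text are made up, and the replacer escaping step is left out):

```lua
-- sketch of the style-run export in exportstyled, with mock data so it
-- runs without scintillua; "result" mimics the flat { style, nextstart }
-- pair list that lexer.lex returns (nextstart is one past the run's end)

local sub, concat = string.sub, table.concat

local text   = "local x = 1"
local result = {
    "keyword",    6,  -- "local"
    "whitespace", 7,  -- " "
    "identifier", 8,  -- "x"
    "whitespace", 9,  -- " "
    "operator",   10, -- "="
    "whitespace", 11, -- " "
    "number",     12, -- "1"
}

local function styled(text,result)
    local buffer, start = { }, 1
    for i=1,#result,2 do
        local style    = result[i]
        local position = result[i+1]
        local txt      = sub(text,start,position-1)
        if style == "whitespace" then
            buffer[#buffer+1] = txt -- whitespace runs pass through as-is
        else
            buffer[#buffer+1] = "\\slx" .. style .. "{" .. txt .. "}"
        end
        start = position
    end
    return concat(buffer)
end

print(styled(text,result))
-- \slxkeyword{local} \slxidentifier{x} \slxoperator{=} \slxnumber{1}
```

The real code additionally maps whitespace detection through a memoizing table (exportwhites) and escapes TeX specials in each run before wrapping it.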
+ +\unprotect + +\unexpanded\def\installscitecommands + {\ctxlua{buffers.scite.installcommands()}% + \let\installscitecommands\relax} + +\unexpanded\def\startscite{\startlines} +\unexpanded\def\stopscite {\stoplines} + +\unexpanded\def\scitefile + {\dosingleargument\module_scite_file} + +\unexpanded\def\module_scite_file[#1]% + {\start + \ctxlua{buffers.scite.lexfile("#1")}% + \installscitecommands + \tt + \dontcomplain + \startscite + \getbuffer[lex]% + \stopscite + \stop} + +\unexpanded\def\scitebuffer + {\dodoubleargument\module_scite_buffer} + +\unexpanded\def\module_scite_buffer[#1][#2]% + {\start + \ifsecondargument + \ctxlua{buffers.scite.lexbuffer("#2","#1")}% + \else + \ctxlua{buffers.scite.lexbuffer("#1","tex")}% + \fi + \installscitecommands + \tt + \dontcomplain + \startscite + \getbuffer[lex]% + \stopscite + \stop} + +\protect + +\continueifinputfile{m-scite.mkiv} + +\setupbodyfont[dejavu,8pt] + +\setuplayout + [width=middle, + height=middle, + header=1cm, + footer=1cm, + topspace=1cm, + bottomspace=1cm, + backspace=1cm] + +\startbuffer[demo] +\startsubsubject[title={oeps}] + +\startMPcode + draw fullcircle + scaled 2cm + withpen pencircle scaled 1mm + withcolor .5green; + draw textext ( + lua ( + "local function f(s) return string.upper(s) end mp.quoted(f('foo'))" + ) + ) withcolor .5red ; +\stopMPcode + +\startluacode + context("foo") +\stopluacode + +\stopsubsubject +\stopbuffer + +\starttext + +% \scitefile[../lexers/scite-context-lexer.lua] \page +% \scitefile[t:/manuals/about/about-metafun.tex] \page +% \scitefile[t:/sources/strc-sec.mkiv] \page +% \scitefile[e:/tmp/mp.w] \page +% \scitefile[t:/manuals/hybrid/tugboat.bib] \page +\scitefile[e:/tmp/test.bib] \page + +% \getbuffer[demo] \scitebuffer[demo] + +\stoptext diff --git a/tex/context/base/m-spreadsheet.lua b/tex/context/base/m-spreadsheet.lua index f329acf9a..1b3c5cb34 100644 --- a/tex/context/base/m-spreadsheet.lua +++ b/tex/context/base/m-spreadsheet.lua @@ -129,10 +129,10 @@ function 
datacell(a,b,...) end local function checktemplate(s) - if find(s,"%%") then + if find(s,"%",1,true) then -- normal template return s - elseif find(s,"@") then + elseif find(s,"@",1,true) then -- tex specific template return gsub(s,"@","%%") else diff --git a/tex/context/base/math-dir.lua b/tex/context/base/math-dir.lua index 525d07831..bcc5461e9 100644 --- a/tex/context/base/math-dir.lua +++ b/tex/context/base/math-dir.lua @@ -33,6 +33,7 @@ local getid = nuts.getid local getlist = nuts.getlist local setfield = nuts.setfield local getattr = nuts.getattr +local setattr = nuts.setattr local insert_node_before = nuts.insert_before local insert_node_after = nuts.insert_after diff --git a/tex/context/base/math-fbk.lua b/tex/context/base/math-fbk.lua index f4bd1348a..70a8ae8d6 100644 --- a/tex/context/base/math-fbk.lua +++ b/tex/context/base/math-fbk.lua @@ -180,12 +180,12 @@ end -- virtualcharacters[0x208B] = 0x002B virtualcharacters[0x207A] = function(data) - data.replacement = 0x2212 + data.replacement = 0x002B return raised(data) end virtualcharacters[0x207B] = function(data) - data.replacement = 0x002B + data.replacement = 0x2212 return raised(data) end @@ -512,7 +512,7 @@ addextra(0xFE940, { category = "mn", description="SMALL ANNUITY SYMBOL", unicode local function actuarian(data) local characters = data.target.characters local parameters = data.target.parameters - local basechar = characters[0x0078] -- x (0x0058 X) + local basechar = characters[0x0078] -- x (0x0058 X) or 0x1D431 local linewidth = parameters.xheight / 10 local basewidth = basechar.width local baseheight = basechar.height diff --git a/tex/context/base/math-fen.mkiv b/tex/context/base/math-fen.mkiv index fe959cc1e..33afbf675 100644 --- a/tex/context/base/math-fen.mkiv +++ b/tex/context/base/math-fen.mkiv @@ -235,6 +235,7 @@ \expandafter\let\csname\??mathright\meaning ⟫\endcsname\Rdoubleangle \expandafter\let\csname\??mathright\meaning }\endcsname\Rbrace \expandafter\let\csname\??mathright\meaning 
|\endcsname\Rbar +\expandafter\let\csname\??mathright\meaning ‖\endcsname\Rdoublebar \expandafter\let\csname\??mathright\meaning ⦀\endcsname\Rtriplebar \expandafter\let\csname\??mathright\meaning /\endcsname\Rsolidus \expandafter\let\csname\??mathright\meaning .\endcsname\Rnothing diff --git a/tex/context/base/math-frc.mkiv b/tex/context/base/math-frc.mkiv index 65fa30942..f4f3f2b84 100644 --- a/tex/context/base/math-frc.mkiv +++ b/tex/context/base/math-frc.mkiv @@ -274,7 +274,7 @@ %D \getbuffer \unexpanded\def\cfrac - {\doifnextoptionalelse\math_cfrac_yes\math_cfrac_nop} + {\doifnextoptionalcselse\math_cfrac_yes\math_cfrac_nop} \def\math_cfrac_nop {\math_cfrac_indeed[cc]} \def\math_cfrac_yes[#1]{\math_cfrac_indeed[#1cc]} diff --git a/tex/context/base/math-ini.lua b/tex/context/base/math-ini.lua index 1351559a0..9772ce538 100644 --- a/tex/context/base/math-ini.lua +++ b/tex/context/base/math-ini.lua @@ -22,8 +22,8 @@ local floor = math.floor local context = context local commands = commands -local contextsprint = context.sprint -local contextfprint = context.fprint -- a bit inefficient +local context_sprint = context.sprint +----- context_fprint = context.fprint -- a bit inefficient local trace_defining = false trackers.register("math.defining", function(v) trace_defining = v end) @@ -213,28 +213,28 @@ local f_char = formatters[ [[\Umathchardef\%s "%X "%X "%X ]] ] local setmathsymbol = function(name,class,family,slot) -- hex is nicer for tracing if class == classes.accent then - contextsprint(f_accent(name,family,slot)) + context_sprint(f_accent(name,family,slot)) elseif class == classes.topaccent then - contextsprint(f_topaccent(name,family,slot)) + context_sprint(f_topaccent(name,family,slot)) elseif class == classes.botaccent then - contextsprint(f_botaccent(name,family,slot)) + context_sprint(f_botaccent(name,family,slot)) elseif class == classes.over then - contextsprint(f_over(name,family,slot)) + context_sprint(f_over(name,family,slot)) elseif class == 
classes.under then - contextsprint(f_under(name,family,slot)) + context_sprint(f_under(name,family,slot)) elseif class == open_class or class == close_class or class == middle_class then setdelcode("global",slot,{family,slot,0,0}) - contextsprint(f_fence(name,class,family,slot)) + context_sprint(f_fence(name,class,family,slot)) elseif class == classes.delimiter then setdelcode("global",slot,{family,slot,0,0}) - contextsprint(f_delimiter(name,family,slot)) + context_sprint(f_delimiter(name,family,slot)) elseif class == classes.radical then - contextsprint(f_radical(name,family,slot)) + context_sprint(f_radical(name,family,slot)) elseif class == classes.root then - contextsprint(f_root(name,family,slot)) + context_sprint(f_root(name,family,slot)) else -- beware, open/close and other specials should not end up here - contextsprint(f_char(name,class,family,slot)) + context_sprint(f_char(name,class,family,slot)) end end diff --git a/tex/context/base/math-ini.mkiv b/tex/context/base/math-ini.mkiv index bf9f5278c..dcd2a5c33 100644 --- a/tex/context/base/math-ini.mkiv +++ b/tex/context/base/math-ini.mkiv @@ -117,7 +117,7 @@ \installswitchcommandhandler \??mathematics {mathematics} \??mathematics \unexpanded\def\startmathematics % no grouping, if ever then also an optional second - {\doifnextoptionalelse\math_mathematics_start_yes\math_mathematics_start_nop} + {\doifnextoptionalcselse\math_mathematics_start_yes\math_mathematics_start_nop} \unexpanded\def\math_mathematics_start_yes[#1]% {\pushmacro\currentmathematics @@ -366,10 +366,39 @@ %D Let's define a few commands here: -\definemathcommand [mathstrut] {\vphantom{(}} +%definemathcommand [mathstrut] {\vphantom{(}} %definemathcommand [joinrel] {\mathrel{\mkern-3mu}} \definemathcommand [joinrel] [rel] {\mkern-3mu} +\chardef\c_math_strut"28 + +\unexpanded\def\math_strut_htdp#1% + {\s!height\fontcharht#1\c_math_strut + \s!depth \fontchardp#1\c_math_strut} + +\unexpanded\def\math_strut_normal + {\vrule +
\normalexpanded{\math_strut_htdp{\mathstylefont\normalmathstyle}}% + \s!width \zeropoint + \relax} + +\unexpanded\def\math_strut_visual + {\hskip-.01\emwidth + \vrule + \normalexpanded{\math_strut_htdp{\mathstylefont\normalmathstyle}}% + \s!width .02\emwidth + \relax + \hskip-.01\emwidth} + +\unexpanded\def\showmathstruts % let's not overload \math_strut_normal + {\let\math_strut\math_strut_visual} + +\let\math_strut\math_strut_normal + +% \unexpanded\def\mathstrut{\mathcodecommand{nothing}{\math_strut}} + +\definemathcommand [mathstrut] {\math_strut} + %D We could have an arg variant \unknown\ but not now. \unexpanded\def\mathopwithlimits#1#2{\mathop{#1{#2}}\limits} @@ -1267,7 +1296,9 @@ %D %D \typebuffer \getbuffer -\unexpanded\def\mathstylehbox#1% +% to be tested: {#1} but it could have side effects + +\unexpanded\def\mathstylehbox#1% sensitive for: a \over b => {a\over b} or \frac{a}{b} {\normalexpanded{\hbox\bgroup \startimath\triggermathstyle\normalmathstyle}\mathsurround\zeropoint#1\stopimath\egroup} diff --git a/tex/context/base/math-noa.lua b/tex/context/base/math-noa.lua index 4e25fe206..a7f0fcf55 100644 --- a/tex/context/base/math-noa.lua +++ b/tex/context/base/math-noa.lua @@ -60,15 +60,14 @@ local tonut = nuts.tonut local nutstring = nuts.tostring local getfield = nuts.getfield +local setfield = nuts.setfield local getnext = nuts.getnext local getprev = nuts.getprev local getid = nuts.getid -local getattr = nuts.getattr local getfont = nuts.getfont local getsubtype = nuts.getsubtype local getchar = nuts.getchar - -local setfield = nuts.setfield +local getattr = nuts.getattr local setattr = nuts.setattr local insert_node_after = nuts.insert_after diff --git a/tex/context/base/math-rad.mkvi b/tex/context/base/math-rad.mkvi index c6053071e..027b5c27d 100644 --- a/tex/context/base/math-rad.mkvi +++ b/tex/context/base/math-rad.mkvi @@ -28,7 +28,7 @@ \def\root#1\of{\rootradical{#1}} % #2
-\unexpanded\def\sqrt{\doifnextoptionalelse\rootwithdegree\rootwithoutdegree} +\unexpanded\def\sqrt{\doifnextoptionalcselse\rootwithdegree\rootwithoutdegree} \def\styledrootradical#1#2% so that \text works ok ... \rootradical behaves somewhat weird {\normalexpanded{\rootradical{\normalunexpanded{#1}}{\noexpand\triggermathstyle{\normalmathstyle}\normalunexpanded{#2}}}} @@ -62,7 +62,7 @@ \unexpanded\def\math_radical_handle#tag% {\begingroup \edef\currentmathradical{#tag}% - \doifnextoptionalelse\math_radical_degree_yes\math_radical_degree_nop} + \doifnextoptionalcselse\math_radical_degree_yes\math_radical_degree_nop} \def\math_radical_alternative{\csname\??mathradicalalternative\mathradicalparameter\c!alternative\endcsname} @@ -74,8 +74,8 @@ \def\math_radical_indeed#body% {\math_radical_alternative{#body}\endgroup} -\setvalue{\??mathradicalalternative\v!default}% #1% - {\rootradical{\currentmathradicaldegree}} +\setvalue{\??mathradicalalternative\v!default}% #body% + {\rootradical{\currentmathradicaldegree}} % {#body}} \setvalue{\??mathradicalalternative\v!normal}#body% {\edef\p_color{\mathradicalparameter\c!color}% diff --git a/tex/context/base/meta-fnt.lua b/tex/context/base/meta-fnt.lua index cf47f0c92..596d0f456 100644 --- a/tex/context/base/meta-fnt.lua +++ b/tex/context/base/meta-fnt.lua @@ -29,7 +29,7 @@ local characters, descriptions = { }, { } local factor, code, slot, width, height, depth, total, variants, bbox, llx, lly, urx, ury = 100, { }, 0, 0, 0, 0, 0, 0, true, 0, 0, 0, 0 -- The next variant of ActualText is what Taco and I could come up with --- eventually. As of September 2013 Acrobat copies okay, Summatra copies a +-- eventually. As of September 2013 Acrobat copies okay, Sumatra copies a -- question mark, pdftotext injects an extra space and Okular adds a -- newline plus space. @@ -79,7 +79,7 @@ local flusher = { if inline then characters[slot] = { commands = { - { "special", "pdf: " .. topdf(slot,code) }, + { "special", "pdf:" .. 
topdf(slot,code) }, } } else diff --git a/tex/context/base/meta-ini.mkiv b/tex/context/base/meta-ini.mkiv index 28ba9e901..281143e40 100644 --- a/tex/context/base/meta-ini.mkiv +++ b/tex/context/base/meta-ini.mkiv @@ -264,7 +264,10 @@ \ifx\p_setups\empty \else \setups[\p_setups]% \fi - \useMPinstancestyleandcolor\c!textstyle\c!textcolor} + \useMPinstancestyleparameter\c!textstyle} + +\def\meta_set_current_color + {\useMPinstancecolorparameter\c!textcolor} \def\meta_stop_current_graphic {\global\t_meta_definitions\emptytoks diff --git a/tex/context/base/meta-pdf.lua b/tex/context/base/meta-pdf.lua index 46e20ad31..512384450 100644 --- a/tex/context/base/meta-pdf.lua +++ b/tex/context/base/meta-pdf.lua @@ -38,8 +38,8 @@ local mptopdf = metapost.mptopdf mptopdf.nofconverted = 0 -local f_translate = formatters["1 0 0 0 1 %f %f cm"] -- no %s due to 1e-035 issues -local f_concat = formatters["%f %f %f %f %f %f cm"] -- no %s due to 1e-035 issues +local f_translate = formatters["1 0 0 0 1 %F %F cm"] -- no %s due to 1e-035 issues +local f_concat = formatters["%F %F %F %F %F %F cm"] -- no %s due to 1e-035 issues local m_path, m_stack, m_texts, m_version, m_date, m_shortcuts = { }, { }, { }, 0, 0, false diff --git a/tex/context/base/meta-tex.mkiv b/tex/context/base/meta-tex.mkiv index deac883c8..e7ed59727 100644 --- a/tex/context/base/meta-tex.mkiv +++ b/tex/context/base/meta-tex.mkiv @@ -28,7 +28,7 @@ \let\stopTeXtexts\relax -\def\TeXtext +\unexpanded\def\TeXtext {\dosingleempty\meta_textext} \def\meta_textext[#1]#2#3% contrary to mkii we don't process yet but we do expand @@ -68,7 +68,7 @@ \unexpanded\def\definetextext[#1]% {\def\currenttextext{#1}% - \doifnextoptionalelse\meta_textext_define_one\meta_textext_define_zero} + \doifnextoptionalcselse\meta_textext_define_one\meta_textext_define_zero} \def\meta_textext_define_one {\setvalue{\??graphictexarguments1:\currenttextext}} \def\meta_textext_define_zero{\setvalue{\??graphictexarguments0:\currenttextext}} @@ -79,7 +79,7 
@@ {textext.drt("\mpsometxt#1{\ctxlua{metapost.escaped(\!!bs#2\!!es)}}")} \unexpanded\def\mpsometxt % no _ catcode - {\doifnextoptionalelse\meta_some_txt_indeed_yes\meta_some_txt_indeed_nop} + {\doifnextoptionalcselse\meta_some_txt_indeed_yes\meta_some_txt_indeed_nop} \def\meta_some_txt_indeed_yes[#1]% {\def\currenttextext{#1}% diff --git a/tex/context/base/mlib-ctx.lua b/tex/context/base/mlib-ctx.lua index a1a4e645a..fe5218771 100644 --- a/tex/context/base/mlib-ctx.lua +++ b/tex/context/base/mlib-ctx.lua @@ -146,9 +146,11 @@ statistics.register("metapost processing time", function() local nofconverted = metapost.makempy.nofconverted local elapsedtime = statistics.elapsedtime local elapsed = statistics.elapsed - local str = format("%s seconds, loading: %s, execution: %s, n: %s, average: %s", + local instances, memory = metapost.getstatistics(true) + local str = format("%s seconds, loading: %s, execution: %s, n: %s, average: %s, instances: %i, memory: %0.3f M", elapsedtime(metapost), elapsedtime(mplib), elapsedtime(metapost.exectime), n, - elapsedtime((elapsed(metapost) + elapsed(mplib) + elapsed(metapost.exectime)) / n)) + elapsedtime((elapsed(metapost) + elapsed(mplib) + elapsed(metapost.exectime)) / n), + instances, memory/(1024*1024)) if nofconverted > 0 then return format("%s, external: %s (%s calls)", str, elapsedtime(metapost.makempy), nofconverted) diff --git a/tex/context/base/mlib-ctx.mkiv b/tex/context/base/mlib-ctx.mkiv index 75ff45488..e4c1cb6fe 100644 --- a/tex/context/base/mlib-ctx.mkiv +++ b/tex/context/base/mlib-ctx.mkiv @@ -18,6 +18,7 @@ \registerctxluafile{mlib-run}{1.001} \registerctxluafile{mlib-ctx}{1.001} +\registerctxluafile{mlib-lua}{1.001} \unprotect diff --git a/tex/context/base/mlib-lua.lua b/tex/context/base/mlib-lua.lua new file mode 100644 index 000000000..9c7a2e43a --- /dev/null +++ b/tex/context/base/mlib-lua.lua @@ -0,0 +1,185 @@ +if not modules then modules = { } end modules ['mlib-lua'] = { + version = 1.001, + comment =
"companion to mlib-ctx.mkiv", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files", +} + +-- This is very preliminary code! + +local type, tostring, select, loadstring = type, tostring, select, loadstring +local formatters = string.formatters +local find, gsub = string.find, string.gsub +local concat = table.concat +local lpegmatch = lpeg.match + +local report_luarun = logs.reporter("metapost","lua") + +local trace_luarun = false trackers.register("metapost.lua",function(v) trace_luarun = v end) +local trace_enabled = true + +mp = mp or { } -- system namespace +MP = MP or { } -- user namespace + +local buffer, n, max = { }, 0, 10 -- we reuse upto max + +function mp._f_() + if trace_enabled and trace_luarun then + local result = concat(buffer," ",1,n) + if n > max then + buffer = { } + end + n = 0 + report_luarun("data: %s",result) + return result + else + if n == 0 then + return "" + end + local result + if n == 1 then + result = buffer[1] + else + result = concat(buffer," ",1,n) + end + if n > max then + buffer = { } + end + n = 0 + return result + end +end + +local f_pair = formatters["(%s,%s)"] +local f_triplet = formatters["(%s,%s,%s)"] +local f_quadruple = formatters["(%s,%s,%s,%s)"] + +function mp.print(...) + for i=1,select("#",...) do + n = n + 1 + buffer[n] = tostring((select(i,...))) + end +end + +function mp.pair(x,y) + n = n + 1 + if type(x) == "table" then + buffer[n] = f_pair(x[1],x[2]) + else + buffer[n] = f_pair(x,y) + end +end + +function mp.triplet(x,y,z) + n = n + 1 + if type(x) == "table" then + buffer[n] = f_triplet(x[1],x[2],x[3]) + else + buffer[n] = f_triplet(x,y,z) + end +end + +function mp.quadruple(w,x,y,z) + n = n + 1 + if type(w) == "table" then + buffer[n] = f_quadruple(w[1],w[2],w[3],w[4]) + else + buffer[n] = f_quadruple(w,x,y,z) + end +end + +local replacer = lpeg.replacer("@","%%") + +function mp.format(fmt,...) 
+ n = n + 1 + if not find(fmt,"%%") then + fmt = lpegmatch(replacer,fmt) + end + buffer[n] = formatters[fmt](...) +end + +function mp.quoted(fmt,s,...) + n = n + 1 + if s then + if not find(fmt,"%%") then + fmt = lpegmatch(replacer,fmt) + end + buffer[n] = '"' .. formatters[fmt](s,...) .. '"' + else + buffer[n] = '"' .. fmt .. '"' + end +end + +local f_code = formatters["%s return mp._f_()"] + +function metapost.runscript(code) + local f = loadstring(f_code(code)) + if f then + return tostring(f()) + else + return "" + end +end + +local cache, n = { }, 0 -- todo: when > n then reset cache or make weak + +function metapost.runscript(code) + if trace_enabled and trace_luarun then + report_luarun("code: %s",code) + end + if n > 100 then + cache = nil -- forget about caching + local f = loadstring(f_code(code)) + if f then + return tostring(f()) + else + return "" + end + else + local f = cache[code] + if f then + return tostring(f()) + else + f = loadstring(f_code(code)) + if f then + n = n + 1 + cache[code] = f + return tostring(f()) + else + return "" + end + end + end +end + +-- function metapost.initializescriptrunner(mpx) +-- mp.numeric = function(s) return mpx:get_numeric(s) end +-- mp.string = function(s) return mpx:get_string (s) end +-- mp.boolean = function(s) return mpx:get_boolean(s) end +-- mp.number = mp.numeric +-- end + +local get_numeric = mplib.get_numeric +local get_string = mplib.get_string +local get_boolean = mplib.get_boolean +local get_number = get_numeric + +-- function metapost.initializescriptrunner(mpx) +-- mp.numeric = function(s) return get_numeric(mpx,s) end +-- mp.string = function(s) return get_string (mpx,s) end +-- mp.boolean = function(s) return get_boolean(mpx,s) end +-- mp.number = mp.numeric +-- end + +local currentmpx = nil + +mp.numeric = function(s) return get_numeric(currentmpx,s) end +mp.string = function(s) return get_string (currentmpx,s) end +mp.boolean = function(s) return get_boolean(currentmpx,s) end +mp.number = 
mp.numeric + +function metapost.initializescriptrunner(mpx,trialrun) + currentmpx = mpx + trace_enabled = not trialrun +end diff --git a/tex/context/base/mlib-pdf.lua b/tex/context/base/mlib-pdf.lua index 6bb08bd1d..d25dde884 100644 --- a/tex/context/base/mlib-pdf.lua +++ b/tex/context/base/mlib-pdf.lua @@ -41,6 +41,18 @@ local experiment = true -- uses context(node) that already does delayed nod local savedliterals = nil -- needs checking local mpsliteral = nodes.pool.register(node.new("whatsit",nodes.whatsitcodes.pdfliteral)) -- pdfliteral.mode = 1 +local f_f = formatters["%F"] + +local f_m = formatters["%F %F m"] +local f_c = formatters["%F %F %F %F %F %F c"] +local f_l = formatters["%F %F l"] +local f_cm = formatters["%F %F %F %F %F %F cm"] +local f_M = formatters["%F M"] +local f_j = formatters["%i j"] +local f_J = formatters["%i J"] +local f_d = formatters["[%s] %F d"] +local f_w = formatters["%F w"] + local pdfliteral = function(s) local literal = copy_node(mpsliteral) literal.data = s @@ -119,7 +131,7 @@ end function pdfflusher.startfigure(n,llx,lly,urx,ury,message) savedliterals = nil metapost.n = metapost.n + 1 - context.startMPLIBtoPDF(llx,lly,urx,ury) + context.startMPLIBtoPDF(f_f(llx),f_f(lly),f_f(urx),f_f(ury)) if message then pdfflusher.comment(message) end end @@ -192,11 +204,11 @@ local function flushnormalpath(path, t, open) nt = nt + 1 pth = path[i] if not ith then - t[nt] = formatters["%f %f m"](pth.x_coord,pth.y_coord) + t[nt] = f_m(pth.x_coord,pth.y_coord) elseif curved(ith,pth) then - t[nt] = formatters["%f %f %f %f %f %f c"](ith.right_x,ith.right_y,pth.left_x,pth.left_y,pth.x_coord,pth.y_coord) + t[nt] = f_c(ith.right_x,ith.right_y,pth.left_x,pth.left_y,pth.x_coord,pth.y_coord) else - t[nt] = formatters["%f %f l"](pth.x_coord,pth.y_coord) + t[nt] = f_l(pth.x_coord,pth.y_coord) end ith = pth end @@ -204,15 +216,15 @@ local function flushnormalpath(path, t, open) nt = nt + 1 local one = path[1] if curved(pth,one) then - t[nt] = formatters["%f 
%f %f %f %f %f c"](pth.right_x,pth.right_y,one.left_x,one.left_y,one.x_coord,one.y_coord ) + t[nt] = f_c(pth.right_x,pth.right_y,one.left_x,one.left_y,one.x_coord,one.y_coord ) else - t[nt] = formatters["%f %f l"](one.x_coord,one.y_coord) + t[nt] = f_l(one.x_coord,one.y_coord) end elseif #path == 1 then -- special case .. draw point local one = path[1] nt = nt + 1 - t[nt] = formatters["%f %f l"](one.x_coord,one.y_coord) + t[nt] = f_l(one.x_coord,one.y_coord) end return t end @@ -226,18 +238,18 @@ local function flushconcatpath(path, t, open) nt = 0 end nt = nt + 1 - t[nt] = formatters["%f %f %f %f %f %f cm"](sx,rx,ry,sy,tx,ty) + t[nt] = f_cm(sx,rx,ry,sy,tx,ty) for i=1,#path do nt = nt + 1 pth = path[i] if not ith then - t[nt] = formatters["%f %f m"](mpconcat(pth.x_coord,pth.y_coord)) + t[nt] = f_m(mpconcat(pth.x_coord,pth.y_coord)) elseif curved(ith,pth) then local a, b = mpconcat(ith.right_x,ith.right_y) local c, d = mpconcat(pth.left_x,pth.left_y) - t[nt] = formatters["%f %f %f %f %f %f c"](a,b,c,d,mpconcat(pth.x_coord,pth.y_coord)) + t[nt] = f_c(a,b,c,d,mpconcat(pth.x_coord,pth.y_coord)) else - t[nt] = formatters["%f %f l"](mpconcat(pth.x_coord, pth.y_coord)) + t[nt] = f_l(mpconcat(pth.x_coord, pth.y_coord)) end ith = pth end @@ -247,15 +259,15 @@ local function flushconcatpath(path, t, open) if curved(pth,one) then local a, b = mpconcat(pth.right_x,pth.right_y) local c, d = mpconcat(one.left_x,one.left_y) - t[nt] = formatters["%f %f %f %f %f %f c"](a,b,c,d,mpconcat(one.x_coord, one.y_coord)) + t[nt] = f_c(a,b,c,d,mpconcat(one.x_coord, one.y_coord)) else - t[nt] = formatters["%f %f l"](mpconcat(one.x_coord,one.y_coord)) + t[nt] = f_l(mpconcat(one.x_coord,one.y_coord)) end elseif #path == 1 then -- special case .. 
draw point nt = nt + 1 local one = path[1] - t[nt] = formatters["%f %f l"](mpconcat(one.x_coord,one.y_coord)) + t[nt] = f_l(mpconcat(one.x_coord,one.y_coord)) end return t end @@ -431,7 +443,7 @@ function metapost.flush(result,flusher,askedfig) elseif objecttype == "text" then t[#t+1] = "q" local ot = object.transform -- 3,4,5,6,1,2 - t[#t+1] = formatters["%f %f %f %f %f %f cm"](ot[3],ot[4],ot[5],ot[6],ot[1],ot[2]) -- TH: formatters["%f %f m %f %f %f %f 0 0 cm"](unpack(ot)) + t[#t+1] = f_cm(ot[3],ot[4],ot[5],ot[6],ot[1],ot[2]) -- TH: formatters["%F %F m %F %F %F %F 0 0 cm"](unpack(ot)) flushfigure(t) -- flush accumulated literals t = { } textfigure(object.font,object.dsize,object.text,object.width,object.height,object.depth) @@ -456,21 +468,21 @@ function metapost.flush(result,flusher,askedfig) local ml = object.miterlimit if ml and ml ~= miterlimit then miterlimit = ml - t[#t+1] = formatters["%f M"](ml) + t[#t+1] = f_M(ml) end local lj = object.linejoin if lj and lj ~= linejoin then linejoin = lj - t[#t+1] = formatters["%i j"](lj) + t[#t+1] = f_j(lj) end local lc = object.linecap if lc and lc ~= linecap then linecap = lc - t[#t+1] = formatters["%i J"](lc) + t[#t+1] = f_J(lc) end local dl = object.dash if dl then - local d = formatters["[%s] %f d"](concat(dl.dashes or {}," "),dl.offset) + local d = f_d(concat(dl.dashes or {}," "),dl.offset) if d ~= dashed then dashed = d t[#t+1] = dashed @@ -486,7 +498,7 @@ function metapost.flush(result,flusher,askedfig) if pen then if pen.type == 'elliptical' then transformed, penwidth = pen_characteristics(original) -- boolean, value - t[#t+1] = formatters["%f w"](penwidth) -- todo: only if changed + t[#t+1] = f_w(penwidth) -- todo: only if changed if objecttype == 'fill' then objecttype = 'both' end @@ -506,7 +518,7 @@ function metapost.flush(result,flusher,askedfig) if objecttype == "fill" then t[#t+1] = "h f" elseif objecttype == "outline" then - t[#t+1] = (open and "S") or "h S" + t[#t+1] = open and "S" or "h S" elseif 
objecttype == "both" then t[#t+1] = "h B" end @@ -527,7 +539,7 @@ function metapost.flush(result,flusher,askedfig) if objecttype == "fill" then t[#t+1] = "h f" elseif objecttype == "outline" then - t[#t+1] = (open and "S") or "h S" + t[#t+1] = open and "S" or "h S" elseif objecttype == "both" then t[#t+1] = "h B" end diff --git a/tex/context/base/mlib-pps.lua b/tex/context/base/mlib-pps.lua index 385fb3ece..ce95d5ca7 100644 --- a/tex/context/base/mlib-pps.lua +++ b/tex/context/base/mlib-pps.lua @@ -18,6 +18,9 @@ local formatters = string.formatters local mplib, metapost, lpdf, context = mplib, metapost, lpdf, context +local context = context +local context_setvalue = context.setvalue + local texgetbox = tex.getbox local texsetbox = tex.setbox local copy_list = node.copy_list @@ -82,10 +85,13 @@ function metapost.setoutercolor(mode,colormodel,colorattribute,transparencyattri innertransparency = outertransparency -- not yet used end -local f_gray = formatters["%.3f g %.3f G"] -local f_rgb = formatters["%.3f %.3f %.3f rg %.3f %.3f %.3f RG"] -local f_cmyk = formatters["%.3f %.3f %.3f %.3f k %.3f %.3f %.3f %.3f K"] -local f_cm = formatters["q %f %f %f %f %f %f cm"] +local f_f = formatters["%F"] +local f_f3 = formatters["%.3F"] + +local f_gray = formatters["%.3F g %.3F G"] +local f_rgb = formatters["%.3F %.3F %.3F rg %.3F %.3F %.3F RG"] +local f_cmyk = formatters["%.3F %.3F %.3F %.3F k %.3F %.3F %.3F %.3F K"] +local f_cm = formatters["q %F %F %F %F %F %F cm"] local f_shade = formatters["MpSh%s"] local function checked_color_pair(color,...) 
@@ -482,8 +488,8 @@ local factor = 65536*(7227/7200) function metapost.edefsxsy(wd,ht,dp) -- helper for figure local hd = ht + dp - context.setvalue("sx",wd ~= 0 and factor/wd or 0) - context.setvalue("sy",hd ~= 0 and factor/hd or 0) + context_setvalue("sx",wd ~= 0 and factor/wd or 0) + context_setvalue("sy",hd ~= 0 and factor/hd or 0) end local function sxsy(wd,ht,dp) -- helper for text @@ -860,7 +866,11 @@ local function tx_reset() end local fmt = formatters["%s %s %s % t"] -local pat = tsplitat(":") +----- pat = tsplitat(":") +local pat = lpeg.tsplitter(":",tonumber) -- so that %F can do its work + +local ctx_MPLIBsetNtext = context.MPLIBsetNtext +local ctx_MPLIBsetCtext = context.MPLIBsetCtext local function tx_analyze(object,prescript) -- todo: hash content and reuse them local tx_stage = prescript.tx_stage @@ -884,27 +894,28 @@ local function tx_analyze(object,prescript) -- todo: hash content and reuse them local tx_last = top.texlast + 1 top.texlast = tx_last if not c then - -- no color + ctx_MPLIBsetNtext(tx_last,s) elseif #c == 1 then if a and t then - s = formatters["\\directcolored[s=%f,a=%f,t=%f]%s"](c[1],a,t,s) + ctx_MPLIBsetCtext(tx_last,formatters["s=%F,a=%F,t=%F"](c[1],a,t),s) else - s = formatters["\\directcolored[s=%f]%s"](c[1],s) + ctx_MPLIBsetCtext(tx_last,formatters["s=%F"](c[1]),s) end elseif #c == 3 then if a and t then - s = formatters["\\directcolored[r=%f,g=%f,b=%f,a=%f,t=%f]%s"](c[1],c[2],c[3],a,t,s) + ctx_MPLIBsetCtext(tx_last,formatters["r=%F,g=%F,b=%F,a=%F,t=%F"](c[1],c[2],c[3],a,t),s) else - s = formatters["\\directcolored[r=%f,g=%f,b=%f]%s"](c[1],c[2],c[3],s) + ctx_MPLIBsetCtext(tx_last,formatters["r=%F,g=%F,b=%F"](c[1],c[2],c[3]),s) end elseif #c == 4 then if a and t then - s = formatters["\\directcolored[c=%f,m=%f,y=%f,k=%f,a=%f,t=%f]%s"](c[1],c[2],c[3],c[4],a,t,s) + ctx_MPLIBsetCtext(tx_last,formatters["c=%F,m=%F,y=%F,k=%F,a=%F,t=%F"](c[1],c[2],c[3],c[4],a,t),s) else - s = 
formatters["\\directcolored[c=%f,m=%f,y=%f,k=%f]%s"](c[1],c[2],c[3],c[4],s) + ctx_MPLIBsetCtext(tx_last,formatters["c=%F,m=%F,y=%F,k=%F"](c[1],c[2],c[3],c[4]),s) end + else + ctx_MPLIBsetNtext(tx_last,s) end - context.MPLIBsettext(tx_last,s) top.multipass = true metapost.multipass = true -- ugly top.texhash[h] = tx_last @@ -956,12 +967,12 @@ local function tx_process(object,prescript,before,after) before[#before+1] = function() -- flush always happens, we can have a special flush function injected before context.MPLIBgettextscaledcm(n, - format("%f",sx), -- bah ... %s no longer checks - format("%f",rx), -- bah ... %s no longer checks - format("%f",ry), -- bah ... %s no longer checks - format("%f",sy), -- bah ... %s no longer checks - format("%f",tx), -- bah ... %s no longer checks - format("%f",ty), -- bah ... %s no longer checks + f_f(sx), -- bah ... %s no longer checks + f_f(rx), -- bah ... %s no longer checks + f_f(ry), -- bah ... %s no longer checks + f_f(sy), -- bah ... %s no longer checks + f_f(tx), -- bah ... %s no longer checks + f_f(ty), -- bah ... 
%s no longer checks sxsy(box.width,box.height,box.depth)) end else @@ -1136,7 +1147,7 @@ end -- color and transparency local value = Cs ( ( - (Carg(1) * C((1-P(","))^1)) / function(a,b) return format("%0.3f",a * tonumber(b)) end + (Carg(1) * C((1-P(","))^1)) / function(a,b) return f_f3(a * tonumber(b)) end + P(","))^1 ) diff --git a/tex/context/base/mlib-pps.mkiv b/tex/context/base/mlib-pps.mkiv index e16827585..207d9730c 100644 --- a/tex/context/base/mlib-pps.mkiv +++ b/tex/context/base/mlib-pps.mkiv @@ -60,27 +60,38 @@ \let\MPLIBflushenvironment\doMPLIBflushenvironment -\def\MPLIBsettext#1% #2% +\unexpanded\def\MPLIBsetNtext#1% #2% box text {\MPLIBflushenvironment \dowithnextbox{\ctxlua{metapost.settext(\number\nextbox,#1)}}\hbox\bgroup + \meta_set_current_color \let\MPLIBflushenvironment\doMPLIBflushenvironment \let\next} % gobble open brace -\def\MPLIBresettexts +\unexpanded\def\MPLIBsetCtext#1#2% #3% box colorspec text + {\MPLIBflushenvironment + \dowithnextbox{\ctxlua{metapost.settext(\number\nextbox,#1)}}\hbox\bgroup + \directcolored[#2]% + \meta_set_current_color % so, textcolor wins ! + \let\MPLIBflushenvironment\doMPLIBflushenvironment + \let\next} % gobble open brace + +\let\MPLIBsettext\MPLIBsetNtext + +\unexpanded\def\MPLIBresettexts {\ctxlua{metapost.resettextexts()}} -\def\MPLIBgettextscaled#1#2#3% why a copy .. can be used more often +\unexpanded\def\MPLIBgettextscaled#1#2#3% why a copy .. 
can be used more often {\ctxlua{metapost.gettext(\number\MPtextbox,#1)}% \vbox to \zeropoint{\vss\hbox to \zeropoint{\scale[\c!sx=#2,\c!sy=#3]{\raise\dp\MPtextbox\box\MPtextbox}\forcecolorhack\hss}}} -\def\MPLIBfigure#1#2% +\unexpanded\def\MPLIBfigure#1#2% {\setbox\scratchbox\hbox{\externalfigure[#1][\c!mask=#2]}% \ctxlua{metapost.edefsxsy(\number\wd\scratchbox,\number\ht\scratchbox,0)}% \vbox to \zeropoint{\vss\hbox to \zeropoint{\scale[\c!sx=\sx,\c!sy=\sy]{\box\scratchbox}\hss}}} % horrible (we could inline scale and matrix code): -\def\MPLIBgettextscaledcm#1#2#3#4#5#6#7#8#9% 2-7: sx,rx,ry,sy,tx,ty +\unexpanded\def\MPLIBgettextscaledcm#1#2#3#4#5#6#7#8#9% 2-7: sx,rx,ry,sy,tx,ty {\ctxlua{metapost.gettext(\number\MPtextbox,#1)}% \setbox\MPbox\hbox\bgroup \dotransformnextbox{#2}{#3}{#4}{#5}{#6}{#7}% does push pop ... will be changed to proper lua call (avoid small numbers) @@ -103,7 +114,7 @@ \smashbox\MPbox \box\MPbox} -\def\MPLIBgraphictext#1% use at mp end +\unexpanded\def\MPLIBgraphictext#1% use at mp end {\startTEXpage[\c!scale=10000]#1\stopTEXpage} %D \startbuffer @@ -132,7 +143,7 @@ %D %D \typebuffer \startlinecorrection \getbuffer \stoplinecorrection -\def\MPLIBpositionwhd#1#2#3#4#5% bp ! +\unexpanded\def\MPLIBpositionwhd#1#2#3#4#5% bp ! 
{\dosavepositionwhd{#1}\zerocount{#2\onebasepoint}{#3\onebasepoint}{#4\onebasepoint}{#5\onebasepoint}\zeropoint} % \def\MPLIBextrapass#1% @@ -158,9 +169,9 @@ \box\scratchbox \endgroup} -\def\MPLIBstartgroup#1#2#3#4#5#6% isolated 0/1, knockout 0/1 llx lly urx ury +\unexpanded\def\MPLIBstartgroup#1#2#3#4#5#6% isolated 0/1, knockout 0/1 llx lly urx ury {\begingroup \setbox\scratchbox\hbox\bgroup - \def\MPLIBstopgroup{\doMPLIBstopgroup{#1}{#2}{#3}{#4}{#5}{#6}}} + \unexpanded\def\MPLIBstopgroup{\doMPLIBstopgroup{#1}{#2}{#3}{#4}{#5}{#6}}} \protect \endinput diff --git a/tex/context/base/mlib-run.lua b/tex/context/base/mlib-run.lua index f30ed0c9f..2a34f44d5 100644 --- a/tex/context/base/mlib-run.lua +++ b/tex/context/base/mlib-run.lua @@ -121,7 +121,7 @@ local function o_finder(name,mode,ftype) return name end -local function finder(name, mode, ftype) +local function finder(name,mode,ftype) if mode == "w" then return o_finder(name,mode,ftype) else @@ -295,17 +295,28 @@ else local methods = { double = "double", scaled = "scaled", + binary = "binary", + decimal = "decimal", default = "scaled", - decimal = false, -- for the moment } + function metapost.runscript(code) + return code + end + + function metapost.scripterror(str) + report_metapost("script error: %s",str) + end + function metapost.load(name,method) starttiming(mplib) method = method and methods[method] or "scaled" local mpx = mplib.new { - ini_version = true, - find_file = finder, - math_mode = method, + ini_version = true, + find_file = finder, + math_mode = method, + run_script = metapost.runscript, + script_error = metapost.scripterror, } report_metapost("initializing number mode %a",method) local result @@ -402,6 +413,10 @@ local mp_inp, mp_log, mp_tag = { }, { }, 0 -- key/values +if not metapost.initializescriptrunner then + function metapost.initializescriptrunner() end +end + function metapost.process(mpx, data, trialrun, flusher, multipass, isextrapass, askedfig) local converted, result = false, { } if 
type(mpx) == "string" then @@ -409,6 +424,7 @@ function metapost.process(mpx, data, trialrun, flusher, multipass, isextrapass, end if mpx and data then starttiming(metapost) + metapost.initializescriptrunner(mpx,trialrun) if trace_graphics then if not mp_inp[mpx] then mp_tag = mp_tag + 1 @@ -625,3 +641,20 @@ function metapost.quickanddirty(mpxformat,data) report_metapost("invalid quick and dirty run") end end + +function metapost.getstatistics(memonly) + if memonly then + local n, m = 0, 0 + for name, mpx in next, mpxformats do + n = n + 1 + m = m + mpx:statistics().memory + end + return n, m + else + local t = { } + for name, mpx in next, mpxformats do + t[name] = mpx:statistics() + end + return t + end +end diff --git a/tex/context/base/mult-aux.lua b/tex/context/base/mult-aux.lua index bdc626d4c..5a103213c 100644 --- a/tex/context/base/mult-aux.lua +++ b/tex/context/base/mult-aux.lua @@ -54,7 +54,7 @@ function namespaces.define(namespace,settings) if trace_namespaces then report_namespaces("namespace %a for %a uses parent %a",namespace,name,parent) end - if not find(parent,"\\") then + if not find(parent,"\\",1,true) then parent = "\\" .. prefix .. 
parent -- todo: check if defined end diff --git a/tex/context/base/mult-aux.mkiv b/tex/context/base/mult-aux.mkiv index 6c44a0ec9..1811f9592 100644 --- a/tex/context/base/mult-aux.mkiv +++ b/tex/context/base/mult-aux.mkiv @@ -106,10 +106,14 @@ \doubleexpandafter\gobbleoneargument \else \mult_interfaces_get_parameters_assign#1==\empty\_e_o_p_ - \doubleexpandafter\mult_interfaces_get_parameters_item + % \doubleexpandafter\mult_interfaces_get_parameters_item % saves skipping when at end \fi\fi#2} -\def\mult_interfaces_get_parameters_error#1#2#3% +\def\mult_interfaces_get_parameters_error#1#2% #3% + {\mult_interfaces_get_parameters_error_indeed{#1}{#2}% + \gobbleoneargument} + +\def\mult_interfaces_get_parameters_error_indeed#1#2% {\showassignerror{#2}{\the\inputlineno\space(#1)}} \def\mult_interfaces_get_parameters_assign#1=#2=#3#4\_e_o_p_ @@ -118,9 +122,54 @@ \else\ifx#3\empty \doubleexpandafter\mult_interfaces_get_parameters_error \else - \doubleexpandafter\dosetvalue + \doubleexpandafter\mult_interfaces_def \fi\fi - \m_mult_interfaces_namespace{#1}{#2}} + \m_mult_interfaces_namespace{#1}{#2}% + \doubleexpandafter\mult_interfaces_get_parameters_item} + +\startinterface english + + % some 10% faster + + \let\mult_interfaces_get_parameters_error\undefined + + \def\mult_interfaces_get_parameters_error_one#1\csname#2#3\endcsname#4% + {\mult_interfaces_get_parameters_error_indeed{#2}{#3}\iftrue} + + \def\mult_interfaces_get_parameters_error_two#1\csname#2#3\endcsname#4% + {\mult_interfaces_get_parameters_error_indeed{#2}{#3}} + + \def\mult_interfaces_get_parameters_assign#1=#2=#3#4\_e_o_p_ + {\ifx\empty#1\empty + \mult_interfaces_get_parameters_error_one + \else\ifx#3\empty + \mult_interfaces_get_parameters_error_two + \else + \expandafter\def\csname\m_mult_interfaces_namespace#1\endcsname{#2}% + \fi\fi + \doubleexpandafter\mult_interfaces_get_parameters_item} + + % interesting but not faster + % + % 
\def\mult_interfaces_get_parameters_error_one#1\m_mult_interfaces_namespace#2\fi\fi% + % {\mult_interfaces_get_parameters_error_indeed\m_mult_interfaces_namespace{#2}\m_mult_interfaces_namespace\s!dummy\fi} + % + % \def\mult_interfaces_get_parameters_error_two#1\m_mult_interfaces_namespace#2\fi\fi% + % {\mult_interfaces_get_parameters_error_indeed\m_mult_interfaces_namespace{#2}\m_mult_interfaces_namespace\s!dummy\fi\fi} + % + % \def\mult_interfaces_get_parameters_assign#1=#2=#3#4\_e_o_p_ + % {\expandafter\def\csname + % \ifx\empty#1\empty + % \mult_interfaces_get_parameters_error_one + % \else\ifx#3\empty + % \mult_interfaces_get_parameters_error_two + % \else + % \m_mult_interfaces_namespace#1% + % \fi\fi + % \endcsname{#2} + % \doubleexpandafter\mult_interfaces_get_parameters_item} + +\stopinterface \newif\ifassignment @@ -132,6 +181,24 @@ % End of experimental code. +\unexpanded\def\mult_interfaces_let #1#2{\expandafter\let \csname#1\ifcsname\k!prefix!#2\endcsname\csname\k!prefix!#2\endcsname\else#2\fi\endcsname} +\unexpanded\def\mult_interfaces_lete#1#2{\expandafter\let \csname#1\ifcsname\k!prefix!#2\endcsname\csname\k!prefix!#2\endcsname\else#2\fi\endcsname\empty} +\unexpanded\def\mult_interfaces_def #1#2{\expandafter\def \csname#1\ifcsname\k!prefix!#2\endcsname\csname\k!prefix!#2\endcsname\else#2\fi\endcsname} +\unexpanded\def\mult_interfaces_edef#1#2{\expandafter\edef\csname#1\ifcsname\k!prefix!#2\endcsname\csname\k!prefix!#2\endcsname\else#2\fi\endcsname} +\unexpanded\def\mult_interfaces_gdef#1#2{\expandafter\gdef\csname#1\ifcsname\k!prefix!#2\endcsname\csname\k!prefix!#2\endcsname\else#2\fi\endcsname} +\unexpanded\def\mult_interfaces_xdef#1#2{\expandafter\xdef\csname#1\ifcsname\k!prefix!#2\endcsname\csname\k!prefix!#2\endcsname\else#2\fi\endcsname} + +\startinterface english + + \unexpanded\def\mult_interfaces_let #1#2{\expandafter \let\csname#1#2\endcsname} + \unexpanded\def\mult_interfaces_lete#1#2{\expandafter \let\csname#1#2\endcsname\empty} + 
\unexpanded\def\mult_interfaces_def #1#2{\expandafter \def\csname#1#2\endcsname} + \unexpanded\def\mult_interfaces_edef#1#2{\expandafter\edef\csname#1#2\endcsname} + \unexpanded\def\mult_interfaces_gdef#1#2{\expandafter\gdef\csname#1#2\endcsname} + \unexpanded\def\mult_interfaces_xdef#1#2{\expandafter\xdef\csname#1#2\endcsname} + +\stopinterface + % the commented detokenized variant that backtracks ... needs testing usage first % % \let\whatever\relax @@ -207,14 +274,14 @@ % In \MKIV\ we can probably use the english variant for all other % languages too. -% todo: inline the \do*value +% todo: inline the def/let \unexpanded\def\mult_interfaces_install_parameter_set_handler#1#2#3#4#5#6% {\ifx#2\relax\let#2\empty\fi - \unexpanded\def#3{\dosetvalue {#1#2:}}% ##1 {##2} (braces are mandate) - \unexpanded\def#4{\dosetevalue{#1#2:}}% ##1 {##2} (braces are mandate) - \unexpanded\def#5{\doletvalue {#1#2:}}% ##1 ##2 - \unexpanded\def#6{\doletvalue {#1#2:}\empty}}% ##1 + \unexpanded\def#3{\mult_interfaces_def {#1#2:}}% ##1 {##2} (braces are mandate) + \unexpanded\def#4{\mult_interfaces_edef{#1#2:}}% ##1 {##2} (braces are mandate) + \unexpanded\def#5{\mult_interfaces_let {#1#2:}}% ##1 ##2 + \unexpanded\def#6{\mult_interfaces_lete{#1#2:}}}% ##1 \startinterface english @@ -548,10 +615,10 @@ \expandafter\noexpand\csname everysetup#2\endcsname}} \unexpanded\def\mult_interfaces_install_direct_parameter_set_handler#1#2#3#4#5% - {\unexpanded\def#2{\dosetvalue #1}% - \unexpanded\def#3{\dosetevalue#1}% - \unexpanded\def#4{\doletvalue #1}% - \unexpanded\def#5{\doletvalue #1\empty}}% + {\unexpanded\def#2{\mult_interfaces_def #1}% + \unexpanded\def#3{\mult_interfaces_edef#1}% + \unexpanded\def#4{\mult_interfaces_let #1}% + \unexpanded\def#5{\mult_interfaces_let #1\empty}}% \startinterface english @@ -694,9 +761,8 @@ \ctxcommand{registernamespace(\number\c_mult_interfaces_n_of_namespaces,"#1")}% \fi} -\def\mult_interfaces_get_parameters_error#1#2#3% redefined - 
{\ctxcommand{showassignerror("#1","#2","#3",\the\inputlineno)}% - \waitonfatalerror} +\def\mult_interfaces_get_parameters_error_indeed#1#2% + {\ctxcommand{showassignerror("#1","#2",\the\inputlineno)}} % no longer \waitonfatalerror % We install two core namespaces here, as we want nice error messages. Maybe % we will reserve the first 9. @@ -856,4 +922,198 @@ %D \edef\m_class_whatever{whatever} %D \stoptyping +% experiment: in principle this is faster but not that noticeable as we don't do that +% many assignments and mechanism that do are also slow; the advantage is mostly nicer +% in tracing + +\def\s!simple{simple} +\def\s!single{single} +\def\s!double{double} +\def\s!triple{triple} + +\unexpanded\def\syst_helpers_double_empty#1#2#3% + {\syst_helpers_argument_reset + \doifnextoptionalelse + {\syst_helpers_double_empty_one_yes_mult#2#3}% + {\syst_helpers_double_empty_one_nop_mult#1}} + +\def\syst_helpers_double_empty_one_yes_mult#1#2[#3]% + {\firstargumenttrue + \doifnextoptionalelse + {\secondargumenttrue#2[{#3}]}% + {\syst_helpers_double_empty_two_nop_mult#1{#3}}} + +\def\syst_helpers_double_empty_one_nop_mult% #1% + {\firstargumentfalse + \secondargumentfalse + }% #1} + +\def\syst_helpers_double_empty_two_nop_mult + {\secondargumentfalse + \if_next_blank_space_token + \expandafter\syst_helpers_double_empty_one_spaced_mult + \else + \expandafter\syst_helpers_double_empty_one_normal_mult + \fi} + +\def\syst_helpers_double_empty_one_spaced_mult#1#2{#1[{#2}] } +\def\syst_helpers_double_empty_one_normal_mult#1#2{#1[{#2}]} + +\unexpanded\def\mult_interfaces_install_setup_handler#1#2#3#4#5#6#7#8% + {\ifx#3\relax\let#3\empty\fi + \unexpanded\def#5{\mult_interfaces_get_parameters{#1#3:}}% no every ! 
don't change it + \newtoks#4% + \newtoks#7% + \edef\m_mult_interface_setup{\strippedcsname#2_}% + \unexpanded\edef#2{\syst_helpers_double_empty + \csname\m_mult_interface_setup\s!simple\endcsname + \csname\m_mult_interface_setup\s!single\endcsname + \csname\m_mult_interface_setup\s!double\endcsname}% + \unexpanded\expandafter\def\csname\m_mult_interface_setup\s!double\endcsname[##1][##2]% + {\let#6#3% + \def#8####1% we will have a simple one as well + {\edef#3{####1}% + \mult_interfaces_get_parameters{#1#3:}[##2]% + \the#4}% + \processcommalist[##1]#8% + \let#3#6% + \the#7}% + \unexpanded\expandafter\def\csname\m_mult_interface_setup\s!single\endcsname[##1]% + {\let#6#3% + \let#3\empty + \mult_interfaces_get_parameters{#1:}[##1]% + \the#4% + \let#3#6% + \the#7}% + \unexpanded\expandafter\def\csname\m_mult_interface_setup\s!simple\endcsname% + {\let#6#3% + \let#3\empty + \the#4% + \let#3#6% + \the#7}} + +\unexpanded\def\installsetuphandler#1#2% + {\normalexpanded + {\mult_interfaces_install_setup_handler + {\noexpand#1}% \??aa + \expandafter\noexpand\csname setup#2\endcsname + \expandafter\noexpand\csname current#2\endcsname + \expandafter\noexpand\csname everysetup#2\endcsname + \expandafter\noexpand\csname setupcurrent#2\endcsname + \expandafter\noexpand\csname saved_setup_current#2\endcsname + \expandafter\noexpand\csname everysetup#2root\endcsname + \expandafter\noexpand\csname nested_setup_current#2\endcsname}} + +\unexpanded\def\syst_helpers_triple_empty#1#2#3#4% + {\syst_helpers_argument_reset + \doifnextoptionalelse + {\syst_helpers_triple_empty_one_yes_mult#2#3#4}% + {\syst_helpers_triple_empty_one_nop_mult#1}} + +\def\syst_helpers_triple_empty_one_yes_mult#1#2#3[#4]% + {\firstargumenttrue + \doifnextoptionalelse + {\syst_helpers_triple_empty_two_yes_mult#2#3{#4}}% + {\syst_helpers_triple_empty_two_nop_mult#1{#4}}} + +\def\syst_helpers_triple_empty_two_yes_mult#1#2#3[#4]% + {\secondargumenttrue + \doifnextoptionalelse + {\thirdargumenttrue#2[{#3}][{#4}]}% + 
{\syst_helpers_triple_empty_three_nop_mult#1{#3}{#4}}} + +\def\syst_helpers_triple_empty_one_nop_mult % #1% + {\firstargumentfalse + \secondargumentfalse + \thirdargumentfalse + } % #1 + +\def\syst_helpers_triple_empty_two_nop_mult + {\secondargumentfalse + \thirdargumentfalse + \if_next_blank_space_token + \expandafter\syst_helpers_triple_empty_two_spaced_mult + \else + \expandafter\syst_helpers_triple_empty_two_normal_mult + \fi} + +\def\syst_helpers_triple_empty_three_nop_mult + {\thirdargumentfalse + \if_next_blank_space_token + \expandafter\syst_helpers_triple_empty_three_spaced_mult + \else + \expandafter\syst_helpers_triple_empty_three_normal_mult + \fi} + +\def\syst_helpers_triple_empty_two_spaced_mult #1#2{#1[{#2}] } +\def\syst_helpers_triple_empty_two_normal_mult #1#2{#1[{#2}]} +\def\syst_helpers_triple_empty_three_spaced_mult#1#2#3{#1[{#2}][{#3}] } +\def\syst_helpers_triple_empty_three_normal_mult#1#2#3{#1[{#2}][{#3}]} + +\unexpanded\def\mult_interfaces_install_auto_setup_handler#1#2#3#4#5#6#7#8% + {\ifx#3\relax\let#3\empty\fi + \unexpanded\def#5{\mult_interfaces_get_parameters{#1#3:}}% + \newtoks#4% + \edef\m_mult_interface_setup{\strippedcsname#2_}% + \unexpanded\edef#2{\syst_helpers_triple_empty + \csname\m_mult_interface_setup\s!simple\endcsname + \csname\m_mult_interface_setup\s!single\endcsname + \csname\m_mult_interface_setup\s!double\endcsname + \csname\m_mult_interface_setup\s!triple\endcsname}% + \unexpanded\expandafter\def\csname\m_mult_interface_setup\s!triple\endcsname[##1][##2][##3]% + {\let#7#3% + \def#8####1% + {\edef#3{####1}% + \expandafter\def\csname#1#3:\s!parent\endcsname{#1##2}% + \mult_interfaces_get_parameters{#1#3:}[##3]% always sets parent + \the#4}% + \processcommalist[##1]#8% + \let#3#7}% + \unexpanded\expandafter\def\csname\m_mult_interface_setup\s!double\endcsname[##1][##2]% + {\let#7#3% + \def#8####1% + {\edef#3{####1}% + #6% checks parent and sets if needed + \mult_interfaces_get_parameters{#1#3:}[##2]% + \the#4}% + 
\processcommalist[##1]#8% + \let#3#7}% + \unexpanded\expandafter\def\csname\m_mult_interface_setup\s!single\endcsname[##1]% + {\let#7#3% + \let#3\empty + \mult_interfaces_get_parameters{#1:}[##1]% + \the#4% + \let#3#7}% + \unexpanded\expandafter\def\csname\m_mult_interface_setup\s!simple\endcsname% + {\let#7#3% + \let#3\empty + \the#4% + \let#3#7}} + +\unexpanded\def\installautosetuphandler#1#2% + {\normalexpanded + {\mult_interfaces_install_auto_setup_handler + {\noexpand#1}% \??aa + \expandafter\noexpand\csname setup#2\endcsname + \expandafter\noexpand\csname current#2\endcsname + \expandafter\noexpand\csname everysetup#2\endcsname + \expandafter\noexpand\csname setupcurrent#2\endcsname + \expandafter\noexpand\csname check#2parent\endcsname + \expandafter\noexpand\csname saved_setup_current#2\endcsname + \expandafter\noexpand\csname nested_setup_current#2\endcsname}} + +% okay, we can also get rid of the #9, but this code looks pretty bad, while the previous is +% still okay given that we can also use #6 as setup1 (so in fact we can save some cs again and +% only use one extra) +% +% \global\advance\commalevel \plusone +% \expandafter\def\csname\??nextcommalevel\the\commalevel\endcsname####1,% +% {\edef#3{####1}% +% \mult_interfaces_get_parameters{#1#3:}[##2]% +% \the#5% +% \syst_helpers_do_process_comma_item}% +% \expandafter\syst_helpers_do_do_process_comma_item\gobbleoneargument\relax##1,]\relax +% % \syst_helpers_do_do_process_comma_item##1,]\relax +% \global\advance\commalevel \minusone + \protect diff --git a/tex/context/base/mult-def.lua b/tex/context/base/mult-def.lua index 65db8fd5e..fc2b932c2 100644 --- a/tex/context/base/mult-def.lua +++ b/tex/context/base/mult-def.lua @@ -3055,7 +3055,7 @@ return { ["pe"]="درجشمارهصفحه", ["ro"]="punenumarpagina", }, - ["placereferencelist"]={ + ["placereferencelist"]={ -- not in mkiv ["cs"]="placereferencelist", ["de"]="placereferencelist", ["en"]="placereferencelist", @@ -9708,7 +9708,7 @@
["en"]="reference", ["fr"]="reference", ["it"]="riferimento", - ["nl"]="verwijzing", + ["nl"]="referentie", ["pe"]="مرجع", ["ro"]="referinta", }, @@ -10917,7 +10917,7 @@ return { ["en"]="unknownreference", ["fr"]="referenceinconnue", ["it"]="riferimentoingoto", - ["nl"]="onbekendeverwijzing", + ["nl"]="onbekendereferentie", ["pe"]="مرجعناشناس", ["ro"]="referintanecunoscuta", }, diff --git a/tex/context/base/mult-def.mkiv b/tex/context/base/mult-def.mkiv index 9206743f4..35b212710 100644 --- a/tex/context/base/mult-def.mkiv +++ b/tex/context/base/mult-def.mkiv @@ -64,6 +64,8 @@ \def\c!group {group} \def\c!groupsuffix {groupsuffix} +\def\c!referencemethod {referencemethod} % forward both + \def\v!dataset {dataset} \def\v!compressseparator{compressseparator} \def\v!notation {notation} @@ -117,6 +119,10 @@ \def\c!etaldisplay{etaldisplay} \def\c!etaltext {etaltext} +\ifdefined\v!simplelist\else \def\v!simplelist{simplelist} \fi +\ifdefined\v!sorting \else \def\v!sorting {sorting} \fi +\ifdefined\v!synonym \else \def\v!synonym {synonym} \fi + % stop todo \protect \endinput diff --git a/tex/context/base/mult-ini.lua b/tex/context/base/mult-ini.lua index e3ff904a6..08f1639d0 100644 --- a/tex/context/base/mult-ini.lua +++ b/tex/context/base/mult-ini.lua @@ -299,12 +299,12 @@ function commands.getmessage(category,tag,default) context(interfaces.getmessage(category,tag,default)) end -function commands.showassignerror(namespace,key,value,line) - local ns, instance = match(namespace,"^(%d+)[^%a]+(%a+)") +function commands.showassignerror(namespace,key,line) + local ns, instance = match(namespace,"^(%d+)[^%a]+(%a*)") if ns then namespace = corenamespaces[tonumber(ns)] or ns end - if instance then + if instance and instance ~= "" then context.writestatus("setup",formatters["error in line %a, namespace %a, instance %a, key %a"](line,namespace,instance,key)) else context.writestatus("setup",formatters["error in line %a, namespace %a, key %a"](line,namespace,key)) diff --git 
a/tex/context/base/mult-low.lua b/tex/context/base/mult-low.lua index 250b20c22..86095edab 100644 --- a/tex/context/base/mult-low.lua +++ b/tex/context/base/mult-low.lua @@ -47,7 +47,7 @@ return { "inicatcodes", "ctxcatcodes", "texcatcodes", "notcatcodes", "txtcatcodes", "vrbcatcodes", "prtcatcodes", "nilcatcodes", "luacatcodes", "tpacatcodes", "tpbcatcodes", - "xmlcatcodes", + "xmlcatcodes", "ctdcatcodes", -- "escapecatcode", "begingroupcatcode", "endgroupcatcode", "mathshiftcatcode", "alignmentcatcode", "endoflinecatcode", "parametercatcode", "superscriptcatcode", "subscriptcatcode", "ignorecatcode", @@ -90,6 +90,7 @@ return { -- "startmode", "stopmode", "startnotmode", "stopnotmode", "startmodeset", "stopmodeset", "doifmode", "doifmodeelse", "doifnotmode", + "startmodeset","stopmodeset", "startallmodes", "stopallmodes", "startnotallmodes", "stopnotallmodes", "doifallmodes", "doifallmodeselse", "doifnotallmodes", "startenvironment", "stopenvironment", "environment", "startcomponent", "stopcomponent", "component", @@ -136,6 +137,7 @@ return { "starttexdefinition", "stoptexdefinition", "starttexcode", "stoptexcode", "startcontextcode", "stopcontextcode", + "startcontextdefinitioncode", "stopcontextdefinitioncode", -- "doifsetupselse", "doifsetups", "doifnotsetups", "setup", "setups", "texsetup", "xmlsetup", "luasetup", "directsetup", "doifelsecommandhandler","doifnotcommandhandler","doifcommandhandler", @@ -219,7 +221,9 @@ return { -- "doif", "doifnot", "doifelse", "doifinset", "doifnotinset", "doifinsetelse", - "doifnextcharelse", "doifnextoptionalelse", "doifnextbgroupelse", "doifnextparenthesiselse", "doiffastoptionalcheckelse", + "doifnextcharelse", "doifnextoptionalelse", "doifnextoptionalcselse", "doiffastoptionalcheckelse", + "doifnextbgroupelse", "doifnextbgroupcselse", + "doifnextparenthesiselse", "doifundefinedelse", "doifdefinedelse", "doifundefined", "doifdefined", "doifelsevalue", "doifvalue", "doifnotvalue", "doifnothing", "doifsomething", 
"doifelsenothing", "doifsomethingelse", @@ -353,6 +357,8 @@ return { "definenamedlua", "obeylualines", "obeyluatokens", "startluacode", "stopluacode", "startlua", "stoplua", + "startctxfunction","stopctxfunction","ctxfunction", + "startctxfunctiondefinition","stopctxfunctiondefinition", -- "carryoverpar", -- diff --git a/tex/context/base/mult-nl.mkii b/tex/context/base/mult-nl.mkii index a1f9742f1..015f58ff1 100644 --- a/tex/context/base/mult-nl.mkii +++ b/tex/context/base/mult-nl.mkii @@ -944,7 +944,7 @@ \setinterfaceconstant{reduction}{reductie} \setinterfaceconstant{ref}{ref} \setinterfaceconstant{refcommand}{refcommand} -\setinterfaceconstant{reference}{verwijzing} +\setinterfaceconstant{reference}{referentie} \setinterfaceconstant{referenceprefix}{referenceprefix} \setinterfaceconstant{referencing}{refereren} \setinterfaceconstant{region}{gebied} @@ -1094,7 +1094,7 @@ \setinterfaceconstant{totalnumber}{totalnumber} \setinterfaceconstant{type}{type} \setinterfaceconstant{unit}{eenheid} -\setinterfaceconstant{unknownreference}{onbekendeverwijzing} +\setinterfaceconstant{unknownreference}{onbekendereferentie} \setinterfaceconstant{urlalternative}{urlvariant} \setinterfaceconstant{urlspace}{urlspatie} \setinterfaceconstant{validate}{valideer} diff --git a/tex/context/base/mult-prm.lua b/tex/context/base/mult-prm.lua index e6fa4abcc..f0b850a5c 100644 --- a/tex/context/base/mult-prm.lua +++ b/tex/context/base/mult-prm.lua @@ -235,6 +235,7 @@ return { "luatexdatestamp", "luatexrevision", "luatexversion", + "luafunction", "mathstyle", "nokerns", "noligs", @@ -573,10 +574,10 @@ return { "catcodetable", "char", "chardef", - "chardp", - "charht", - "charit", - "charwd", +--"chardp", +--"charht", +--"charit", +--"charwd", "cleaders", "clearmarks", "closein", diff --git a/tex/context/base/node-aux.lua b/tex/context/base/node-aux.lua index 7f4b0342a..499116258 100644 --- a/tex/context/base/node-aux.lua +++ b/tex/context/base/node-aux.lua @@ -49,6 +49,7 @@ local 
copy_node_list = nuts.copy_list local find_tail = nuts.tail local insert_node_after = nuts.insert_after local isnode = nuts.is_node +local getbox = nuts.getbox local nodes_traverse_id = nodes.traverse_id local nodes_first_glyph = nodes.first_glyph @@ -61,8 +62,52 @@ local unsetvalue = attributes.unsetvalue local current_font = font.current +local texsetbox = tex.setbox + local report_error = logs.reporter("node-aux:error") +-- At some point we figured that copying before using was the safest bet +-- when dealing with boxes at the tex end. This is because tex also needs +-- to manage the grouping (i.e. savestack). However, there is an easy +-- solution that keeps the tex end happy as tex.setbox deals with this. The +-- overhead of one temporary list node is negligible. +-- +-- function tex.takebox(id) +-- local box = tex.getbox(id) +-- if box then +-- local copy = node.copy(box) +-- local list = box.list +-- copy.list = list +-- box.list = nil +-- tex.setbox(id,nil) +-- return copy +-- end +-- end + +local function takebox(id) + local box = getbox(id) + if box then + local copy = copy_node(box) + local list = getlist(box) + setfield(copy,"list",list) + setfield(box,"list",nil) + texsetbox(id,nil) + return copy + end +end + +function nodes.takebox(id) + local b = takebox(id) + if b then + return tonode(b) + end +end + +nuts.takebox = takebox +tex.takebox = nodes.takebox -- sometimes more clear + +-- so far + local function repackhlist(list,...) local temp, b = hpack_nodes(list,...) 
list = getlist(temp) diff --git a/tex/context/base/node-fin.lua b/tex/context/base/node-fin.lua index 8476b47a6..250035f39 100644 --- a/tex/context/base/node-fin.lua +++ b/tex/context/base/node-fin.lua @@ -20,14 +20,13 @@ local tonode = nuts.tonode local tonut = nuts.tonut local getfield = nuts.getfield +local setfield = nuts.setfield local getnext = nuts.getnext local getprev = nuts.getprev local getid = nuts.getid local getlist = nuts.getlist local getleader = nuts.getleader local getattr = nuts.getattr - -local setfield = nuts.setfield local setattr = nuts.setattr local copy_node = nuts.copy diff --git a/tex/context/base/node-ini.lua b/tex/context/base/node-ini.lua index 652b46caf..a9ef305c0 100644 --- a/tex/context/base/node-ini.lua +++ b/tex/context/base/node-ini.lua @@ -220,6 +220,8 @@ listcodes.column = listcodes.alignment kerncodes.italiccorrection = kerncodes.userkern kerncodes.kerning = kerncodes.fontkern +whatcodes.textdir = whatcodes.dir + nodes.codes = allocate { -- mostly for listing glue = skipcodes, noad = noadcodes, diff --git a/tex/context/base/node-ini.mkiv b/tex/context/base/node-ini.mkiv index e99653327..5fc519069 100644 --- a/tex/context/base/node-ini.mkiv +++ b/tex/context/base/node-ini.mkiv @@ -19,10 +19,9 @@ \registerctxluafile{node-ini}{1.001} \registerctxluafile{node-met}{1.001} - -\ctxlua{if nodes.gonuts then context.registerctxluafile("node-nut","1.001") end} - +\registerctxluafile{node-nut}{1.001} \registerctxluafile{node-res}{1.001} +\registerctxluafile{node-ppt}{1.001} % experimental \registerctxluafile{node-dir}{1.001} \registerctxluafile{node-aux}{1.001} \registerctxluafile{node-tst}{1.001} @@ -36,6 +35,8 @@ \registerctxluafile{node-acc}{1.001} % experimental %registerctxluafile{node-prp}{1.001} % makes no sense (yet) +\doiffileelse{node-ppt.lua}{\registerctxluafile{node-ppt}{1.001}}{} + \newcount\c_node_tracers_show_box % box number \unexpanded\def\shownextnodes{\afterassignment\node_tracers_show_next\c_node_tracers_show_box} diff 
--git a/tex/context/base/node-inj.lua b/tex/context/base/node-inj.lua index f30070e9e..b91646ffc 100644 --- a/tex/context/base/node-inj.lua +++ b/tex/context/base/node-inj.lua @@ -8,8 +8,7 @@ if not modules then modules = { } end modules ['node-inj'] = { -- This is very experimental (this will change when we have luatex > .50 and -- a few pending thingies are available. Also, Idris needs to make a few more --- test fonts. Btw, future versions of luatex will have extended glyph properties --- that can be of help. Some optimizations can go away when we have faster machines. +-- test fonts. Some optimizations can go away when we have faster machines. -- todo: ignore kerns between disc and glyph @@ -30,7 +29,6 @@ local injections = nodes.injections local nodecodes = nodes.nodecodes local glyph_code = nodecodes.glyph -local disc_code = nodecodes.disc local kern_code = nodecodes.kern local nuts = nodes.nuts @@ -58,7 +56,7 @@ local insert_node_before = nuts.insert_before local insert_node_after = nuts.insert_after local a_kernpair = attributes.private('kernpair') -local a_ligacomp = attributes.private('ligacomp') +----- a_ligacomp = attributes.private('ligacomp') local a_markbase = attributes.private('markbase') local a_markmark = attributes.private('markmark') local a_markdone = attributes.private('markdone') @@ -127,9 +125,9 @@ function injections.setkern(current,factor,rlmode,x,tfmchr) end end -function injections.setmark(start,base,factor,rlmode,ba,ma,index,baseismark) -- ba=baseanchor, ma=markanchor - local dx, dy = factor*(ba[1]-ma[1]), factor*(ba[2]-ma[2]) -- the index argument is no longer used but when this - local bound = getattr(base,a_markbase) -- fails again we should pass it +function injections.setmark(start,base,factor,rlmode,ba,ma) -- ba=baseanchor, ma=markanchor + local dx, dy = factor*(ba[1]-ma[1]), factor*(ba[2]-ma[2]) + local bound = getattr(base,a_markbase) local index = 1 if bound then local mb = marks[bound] @@ -144,13 +142,12 @@ function 
injections.setmark(start,base,factor,rlmode,ba,ma,index,baseismark) -- report_injections("possible problem, %U is base mark without data (id %a)",getchar(base),bound) end end --- index = index or 1 index = index or 1 bound = #marks + 1 setattr(base,a_markbase,bound) setattr(start,a_markmark,bound) setattr(start,a_markdone,index) - marks[bound] = { [index] = { dx, dy, rlmode, baseismark } } + marks[bound] = { [index] = { dx, dy, rlmode } } return dx, dy, bound end @@ -354,7 +351,7 @@ function injections.handler(head,where,keep) end end if maxt > 0 then - local ny = getfield(n,"yoffset") + local ny = getfield(n,"yoffset") -- hm, n unset ? for i=maxt,1,-1 do ny = ny + d[i] local ti = t[i] @@ -516,8 +513,7 @@ function injections.handler(head,where,keep) -- if trace_injections then -- show_result(head) -- end -head = tonode(head) - return head, true + return tonode(head), true elseif not keep then kerns, cursives, marks = { }, { }, { } end diff --git a/tex/context/base/node-ltp.lua b/tex/context/base/node-ltp.lua index 9f2491cfa..6ad5de140 100644 --- a/tex/context/base/node-ltp.lua +++ b/tex/context/base/node-ltp.lua @@ -1439,6 +1439,10 @@ local function post_line_break(par) elseif id < math_code then -- messy criterium break +elseif id == math_code then + -- keep the math node + setfield(next,"surround",0) + break elseif id == kern_code and (subtype ~= userkern_code and not getattr(next,a_fontkern)) then -- fontkerns and accent kerns as well as otf injections break diff --git a/tex/context/base/node-met.lua b/tex/context/base/node-met.lua index d52349b4a..335ce2a98 100644 --- a/tex/context/base/node-met.lua +++ b/tex/context/base/node-met.lua @@ -68,7 +68,7 @@ local nodes = nodes nodes.gonuts = gonuts -local nodecodes = nodes.codes +local nodecodes = nodes.nodecodes local hlist_code = nodecodes.hlist local vlist_code = nodecodes.vlist diff --git a/tex/context/base/node-nut.lua b/tex/context/base/node-nut.lua index 4732b09eb..2b4e9968c 100644 --- 
a/tex/context/base/node-nut.lua +++ b/tex/context/base/node-nut.lua @@ -129,7 +129,7 @@ nuts.getfield = direct.getfield nuts.getnext = direct.getnext nuts.getprev = direct.getprev nuts.getid = direct.getid -nuts.getattr = direct.getfield +nuts.getattr = direct.has_attribute or direct.getfield nuts.getchar = direct.getchar nuts.getfont = direct.getfont nuts.getsubtype = direct.getsubtype @@ -141,7 +141,7 @@ nuts.getleader = direct.getleader -- setters nuts.setfield = direct.setfield -nuts.setattr = direct.setfield +nuts.setattr = direct.set_attribute or setfield nuts.getbox = direct.getbox nuts.setbox = direct.setbox @@ -648,3 +648,55 @@ nuts.untracedslide = untracedslide nuts.nestedtracedslide = nestedtracedslide -- nuts.slide = tracedslide + +-- this might move + +local propertydata = direct.get_properties_table and direct.get_properties_table() + +local getattr = nuts.getattr +local setattr = nuts.setattr + +if propertydata then + + nodes.properties = { + data = propertydata, + } + + direct.set_properties_mode(true,false) + -- direct.set_properties_mode(true,true) + + -- experimental code with respect to copying attributes has been removed + -- as it doesn't pay of (most attributes are only accessed once anyway) + + nuts.getprop = function(n,k) + local p = propertydata[n] + if p then + return p[k] + end + end + + nuts.setprop = function(n,k,v) + if v then + local p = propertydata[n] + if p then + p[k] = v + else + propertydata[n] = { [k] = v } + end + end + end + + nodes.setprop = nodes.setproperty + nodes.getprop = nodes.getproperty + +else + + -- for testing and simple cases + + nuts.getprop = getattr + nuts.setprop = setattr + + nodes.setprop = getattr + nodes.getprop = setattr + +end diff --git a/tex/context/base/node-ppt.lua b/tex/context/base/node-ppt.lua new file mode 100644 index 000000000..c8cba8566 --- /dev/null +++ b/tex/context/base/node-ppt.lua @@ -0,0 +1,476 @@ +if not modules then modules = { } end modules ['node-ppt'] = { + version = 1.001, + 
comment = "companion to node-ini.mkiv", + author = "Hans Hagen, PRAGMA-ADE, Hasselt NL", + copyright = "PRAGMA ADE / ConTeXt Development Team", + license = "see context related readme files" +} + +-- This is all very experimental and likely to change. + +local next, type, unpack, load = next, type, table.unpack, load + +local serialize = table.serialize +local formatters = string.formatters + +local report = logs.reporter("properties") +local report_setting = logs.reporter("properties","setting") +local trace_setting = false trackers.register("properties.setting", function(v) trace_setting = v end) + +-- report("using experimental properties") + +local nuts = nodes.nuts +local tonut = nuts.tonut +local tonode = nuts.tonode +local getid = nuts.getid +local getnext = nuts.getnext +local getprev = nuts.getprev +local getsubtype = nuts.getsubtype +local getfield = nuts.getfield +local setfield = nuts.setfield +local getlist = nuts.getlist +local flushnode = nuts.flush +local removenode = nuts.remove +local traverse = nuts.traverse +local traverse_id = nuts.traverse_id + +local nodecodes = nodes.nodecodes +local whatsitcodes = nodes.whatsitcodes + +local whatsit_code = nodecodes.whatsit +local hlist_code = nodecodes.hlist +local vlist_code = nodecodes.vlist +local userdefined_code = whatsitcodes.userdefined +local localpar_code = whatsitcodes.localpar + +local nodepool = nodes.pool +local new_usernumber = nodepool.usernumber + +local nutpool = nuts.pool +local nut_usernumber = nutpool.usernumber + +local variables = interfaces.variables +local v_before = variables.before +local v_after = variables.after +local v_here = variables.here + +local cache = { } +local nofslots = 0 +local property_id = nodepool.userids["property"] + +local properties = nodes.properties if not properties then return end -- temp +local propertydata = properties.data + +local starttiming = statistics.starttiming +local stoptiming = statistics.stoptiming + +if not propertydata then + return +end + 
+-- management + +local function register(where,data,...) + if not data then + data = where + where = v_after + end + if data then + local data = { where, data, ... } + nofslots = nofslots + 1 + if nofslots > 1 then + cache[nofslots] = data + else + -- report("restarting attacher") + cache = { data } -- also forces collection + end + return new_usernumber(property_id,nofslots) + end +end + +local writenode = node.write +local flushnode = context.flushnode + +function commands.deferredproperty(...) +-- context(register(...)) + flushnode(register(...)) +end + + +function commands.immediateproperty(...) + writenode(register(...)) +end + +commands.attachproperty = commands.deferredproperty + +local actions = { } properties.actions = actions + +table.setmetatableindex(actions,function(t,k) + report("unknown property action %a",k) + local v = function() end + return v +end) + +local f_delayed = formatters["return function(target,head,where,propdata,parent) %s end"] +local f_immediate = formatters["return function(target,head,where,propdata) %s end"] + +local nofdelayed = 0 -- better is to keep track of it per page ... we can have deleted nodes with properties + +function actions.delayed(target,head,where,propdata,code,...) -- this one is used at the tex end +-- local kind = type(code) +-- if kind == "string" then +-- code, err = load(f_delayed(code)) +-- if code then +-- code = code() +-- end +-- elseif kind ~= "function" then +-- code = nil +-- end + if code then + local delayed = propdata.delayed + if delayed then + delayed[#delayed+1] = { where, code, ... } + else + propdata.delayed = { { where, code, ... } } + nofdelayed = nofdelayed + 1 + end + end +end + +function actions.fdelayed(target,head,where,propdata,code,...) 
-- this one is used at the tex end +-- local kind = type(code) +-- if kind == "string" then +-- code, err = load(f_delayed(code)) +-- if code then +-- code = code() +-- end +-- elseif kind ~= "function" then +-- code = nil +-- end + if code then + local delayed = propdata.delayed + if delayed then + delayed[#delayed+1] = { false, code, ... } + else + propdata.delayed = { { false, code, ... } } + nofdelayed = nofdelayed + 1 + end + end +end + +function actions.immediate(target,head,where,propdata,code,...) -- this one is used at the tex end + local kind = type(code) + if kind == "string" then + local f = f_immediate(code) + local okay, err = load(f) + if okay then + local h = okay()(target,head,where,propdata,...) + if h and h ~= head then + return h + end + end + elseif kind == "function" then + local h = code()(target,head,where,propdata,...) + if h and h ~= head then + return h + end + end +end + +-- another experiment (a table or function closure are equally efficient); a function +-- is easier when we want to experiment with different (compatible) implementations + +-- function nodes.nuts.pool.deferredfunction(...) +-- nofdelayed = nofdelayed + 1 +-- local n = nut_usernumber(property_id,0) +-- propertydata[n] = { deferred = { ... 
} } +-- return n +-- end + +-- function nodes.nuts.pool.deferredfunction(f) +-- nofdelayed = nofdelayed + 1 +-- local n = nut_usernumber(property_id,0) +-- propertydata[n] = { deferred = f } +-- return n +-- end + +-- maybe actions will get parent too + +local function delayed(head,parent) -- direct based + for target in traverse(head) do + local p = propertydata[target] + if p then + -- local deferred = p.deferred -- kind of late lua (but too soon as we have no access to pdf.h/v) + -- if deferred then + -- -- if #deferred > 0 then + -- -- deferred[1](unpack(deferred,2)) + -- -- else + -- -- deferred[1]() + -- -- end + -- deferred() + -- p.deferred = false + -- if nofdelayed == 1 then + -- nofdelayed = 0 + -- return head + -- else + -- nofdelayed = nofdelayed - 1 + -- end + -- else + local delayed = p.delayed + if delayed then + for i=1,#delayed do + local d = delayed[i] + local code = d[2] + local kind = type(code) + if kind == "string" then + code, err = load(f_delayed(code)) + if code then + code = code() + end + end + local where = d[1] + if where then + local h = code(target,where,head,p,parent,unpack(d,3)) -- target where propdata head parent + if h and h ~= head then + head = h + end + else + code(unpack(d,3)) + end + end + p.delayed = nil + if nofdelayed == 1 then + nofdelayed = 0 + return head + else + nofdelayed = nofdelayed - 1 + end + end + -- end + end + local id = getid(target) + if id == hlist_code or id == vlist_code then + local list = getlist(target) + if list then + local done = delayed(list,parent) + if done then + setfield(target,"list",done) + end + if nofdelayed == 0 then + return head + end + end + else + -- maybe also some more lists? but we will only use this for some + -- special cases .. 
who knows + end + end + return head +end + +function properties.delayed(head) -- + if nofdelayed > 0 then + -- if next(propertydata) then + starttiming(properties) + head = delayed(tonut(head)) + stoptiming(properties) + return tonode(head), true -- done in shipout anyway + -- else + -- delayed = 0 + -- end + end + return head, false +end + +-- more explicit ones too + +local anchored = { + [v_before] = function(n) + while n do + n = getprev(n) + if getid(n) == whatsit_code and getsubtype(n) == user_code and getfield(n,"user_id") == property_id then + -- continue + else + return n + end + end + end, + [v_after] = function(n) + while n do + n = getnext(n) + if getid(n) == whatsit_code then + local subtype = getsubtype(n) + if (subtype == userdefined_code and getfield(n,"user_id") == property_id) then + -- continue + elseif subtype == localpar_code then + -- continue .. can't happen anyway as we cannot write + else + return n + end + else + return n + end + end + end, + [v_here] = function(n) + -- todo + end, +} + +table.setmetatableindex(anchored,function(t,k) + v = anchored[v_after] + t[k] = v + return v +end) + +function properties.attach(head) + + if nofslots <= 0 then + return head, false + end + + local done = false + local last = nil + local head = tonut(head) + + starttiming(properties) + + for source in traverse_id(whatsit_code,head) do + if getsubtype(source) == userdefined_code then + if last then + removenode(head,last,true) + last = nil + end + if getfield(source,"user_id") == property_id then + local slot = getfield(source,"value") + local data = cache[slot] + if data then + cache[slot] = nil + local where = data[1] + local target = anchored[where](source) + if target then + local first = data[2] + local method = type(first) + local p_target = propertydata[target] + local p_source = propertydata[source] + if p_target then + if p_source then + for k, v in next, p_source do + p_target[k] = v + end + end + if method == "table" then + for k, v in next, 
first do + p_target[k] = v + end + elseif method == "function" then + first(target,head,where,p_target,unpack(data,3)) + elseif method == "string" then + actions[first](target,head,where,p_target,unpack(data,3)) + end + elseif p_source then + if method == "table" then + propertydata[target] = p_source + for k, v in next, first do + p_source[k] = v + end + elseif method == "function" then + propertydata[target] = p_source + first(target,head,where,p_source,unpack(data,3)) + elseif method == "string" then + propertydata[target] = p_source + actions[first](target,head,where,p_source,unpack(data,3)) + end + else + if method == "table" then + propertydata[target] = first + elseif method == "function" then + local t = { } + propertydata[target] = t + first(target,head,where,t,unpack(data,3)) + elseif method == "string" then + local t = { } + propertydata[target] = t + actions[first](target,head,where,t,unpack(data,3)) + end + end + if trace_setting then + report_setting("node %i, id %s, data %s", + target,nodecodes[getid(target)],serialize(propertydata[target],false)) + end + end + if nofslots == 1 then + nofslots = 0 + last = source + break + else + nofslots = nofslots - 1 + end + end + last = source + end + end + end + + if last then + removenode(head,last,true) + end + + stoptiming(properties) + + return head, done + +end + +local tasks = nodes.tasks + +-- maybe better hard coded in-place + +-- tasks.prependaction("processors","before","nodes.properties.attach") +-- tasks.appendaction("shipouts","normalizers","nodes.properties.delayed") + +statistics.register("properties processing time", function() + return statistics.elapsedseconds(properties) +end) + +-- only for development + +-- local function show(head,level,report) +-- for target in traverse(head) do +-- local p = propertydata[target] +-- if p then +-- report("level %i, node %i, id %s, data %s", +-- level,target,nodecodes[getid(target)],serialize(propertydata[target],false)) +-- end +-- local id = getid(target) 
+-- if id == hlist_code or id == vlist_code then +-- local list = getlist(target) +-- if list then +-- show(list,level+1,report) +-- end +-- else +-- -- maybe more lists +-- end +-- end +-- return head, false +-- end +-- +-- local report_shipout = logs.reporter("properties","shipout") +-- local report_processors = logs.reporter("properties","processors") +-- +-- function properties.showshipout (head) return tonode(show(tonut(head),1,report_shipout )), true end +-- function properties.showprocessors(head) return tonode(show(tonut(head),1,report_processors)), true end +-- +-- tasks.prependaction("shipouts","before","nodes.properties.showshipout") +-- tasks.disableaction("shipouts","nodes.properties.showshipout") +-- +-- trackers.register("properties.shipout",function(v) +-- tasks.setaction("shipouts","nodes.properties.showshipout",v) +-- end) +-- +-- tasks.appendaction ("processors","after","nodes.properties.showprocessors") +-- tasks.disableaction("processors","nodes.properties.showprocessors") +-- +-- trackers.register("properties.processors",function(v) +-- tasks.setaction("processors","nodes.properties.showprocessors",v) +-- end) diff --git a/tex/context/base/node-ref.lua b/tex/context/base/node-ref.lua index 7cfbde849..c55db4ea3 100644 --- a/tex/context/base/node-ref.lua +++ b/tex/context/base/node-ref.lua @@ -123,28 +123,27 @@ local function inject_range(head,first,last,reference,make,stack,parent,pardir,t if result and resolved then if head == first then if trace_backend then - report_area("head: %04i %s %s %s => w=%p, h=%p, d=%p, c=%s",reference,pardir or "---",txtdir or "----",tosequence(first,last,true),width,height,depth,resolved) + report_area("%s: %04i %s %s %s => w=%p, h=%p, d=%p, c=%S","head", + reference,pardir or "---",txtdir or "---",tosequence(first,last,true),width,height,depth,resolved) end setfield(result,"next",first) setfield(first,"prev",result) return result, last else if trace_backend then - report_area("middle: %04i %s %s => w=%p, h=%p, 
d=%p, c=%s",reference,pardir or "---",txtdir or "----",tosequence(first,last,true),width,height,depth,resolved) + report_area("%s: %04i %s %s %s => w=%p, h=%p, d=%p, c=%S","middle", + reference,pardir or "---",txtdir or "---",tosequence(first,last,true),width,height,depth,resolved) end local prev = getprev(first) if prev then - setfield(result,"next",first) - setfield(result,"prev",prev) setfield(prev,"next",result) - setfield(first,"prev",result) - else - setfield(result,"next",first) - setfield(first,"prev",result) - end - if first == getnext(head) then - setfield(head,"next",result) -- hm, weird + setfield(result,"prev",prev) end + setfield(result,"next",first) + setfield(first,"prev",result) +-- if first == getnext(head) then +-- setfield(head,"next",result) -- hm, weird +-- end return head, last end else @@ -195,7 +194,8 @@ local function inject_list(id,current,reference,make,stack,pardir,txtdir) -- todo: only when width is ok if result and resolved then if trace_backend then - report_area("box: %04i %s %s: w=%p, h=%p, d=%p, c=%s",reference,pardir or "---",txtdir or "----",width,height,depth,resolved) + report_area("%s: %04i %s %s %s: w=%p, h=%p, d=%p, c=%S","box", + reference,pardir or "---",txtdir or "----","[]",width,height,depth,resolved) end if not first then setfield(current,"list",result) @@ -228,14 +228,25 @@ local function inject_areas(head,attribute,make,stack,done,skip,parent,pardir,tx local id = getid(current) if id == hlist_code or id == vlist_code then local r = getattr(current,attribute) - -- somehow reference is true so the following fails (second one not done) in - -- test \goto{test}[page(2)] test \gotobox{test}[page(2)] - -- so let's wait till this fails again - -- if not reference and r and (not skip or r > skip) then -- > or ~= - if r and (not skip or r > skip) then -- > or ~= - inject_list(id,current,r,make,stack,pardir,txtdir) - end + -- test \goto{test}[page(2)] test \gotobox{test}[page(2)] + -- test \goto{\TeX}[page(2)] test 
\gotobox{\hbox {x} \hbox {x}}[page(2)] + -- if r and (not skip or r > skip) then -- maybe no > test + -- inject_list(id,current,r,make,stack,pardir,txtdir) + -- end if r then + if not reference then + reference, first, last, firstdir = r, current, current, txtdir + elseif r == reference then + -- same link + last = current + elseif (done[reference] or 0) == 0 then + if not skip or r > skip then -- maybe no > test + head, current = inject_range(head,first,last,reference,make,stack,parent,pardir,firstdir) + reference, first, last, firstdir = nil, nil, nil, nil + end + else + reference, first, last, firstdir = r, current, current, txtdir + end done[r] = (done[r] or 0) + 1 end local list = getlist(current) @@ -297,7 +308,7 @@ local function inject_area(head,attribute,make,stack,done,parent,pardir,txtdir) end local list = getlist(current) if list then - setfield(current,"list",inject_area(list,attribute,make,stack,done,current,pardir,txtdir)) + setfield(current,"list",(inject_area(list,attribute,make,stack,done,current,pardir,txtdir))) end elseif id == whatsit_code then local subtype = getsubtype(current) @@ -429,6 +440,7 @@ annot = tonut(annot) end if current then setfield(current,"next",annot) + setfield(annot,"prev",current) else result = annot end @@ -503,32 +515,28 @@ local function makedestination(width,height,depth,reference) step = 4*65536 width, height, depth = 5*step, 5*step, 0 end - for n=1,#name do - local rule = hpack_list(colorize(width,height,depth,3,reference,"destination")) - setfield(rule,"width",0) - if not result then - result, current = rule, rule - else - setfield(current,"next",rule) - setfield(rule,"prev",current) - current = rule - end - width, height = width - step, height - step + local rule = hpack_list(colorize(width,height,depth,3,reference,"destination")) + setfield(rule,"width",0) + if not result then + result, current = rule, rule + else + setfield(current,"next",rule) + setfield(rule,"prev",current) + current = rule end + width, height 
= width - step, height - step end nofdestinations = nofdestinations + 1 - for n=1,#name do - local annot = nodeinjections.destination(width,height,depth,name[n],view) - if annot then -annot = tonut(annot) -- obsolete soon - if not result then - result = annot - else - setfield(current,"next",annot) - setfield(annot,"prev",current) - end - current = find_node_tail(annot) + local annot = nodeinjections.destination(width,height,depth,name,view) + if annot then + annot = tonut(annot) -- obsolete soon + if result then + setfield(current,"next",annot) + setfield(annot,"prev",current) + else + result = annot end + current = find_node_tail(annot) end if result then -- some internal error diff --git a/tex/context/base/node-res.lua b/tex/context/base/node-res.lua index 968283745..1a9d6f02e 100644 --- a/tex/context/base/node-res.lua +++ b/tex/context/base/node-res.lua @@ -69,11 +69,15 @@ local getbox = nuts.getbox local getfield = nuts.getfield local setfield = nuts.setfield local getid = nuts.getid +local getlist = nuts.getlist local copy_nut = nuts.copy local new_nut = nuts.new local free_nut = nuts.free +local copy_node = nodes.copy +local new_node = nodes.new + -- at some point we could have a dual set (the overhead of tonut is not much larger than -- metatable associations at the lua/c end esp if we also take assignments into account @@ -253,6 +257,19 @@ function nutpool.glue(width,stretch,shrink,stretch_order,shrink_order) return someskip(glue,width,stretch,shrink,stretch_order,shrink_order) end +function nutpool.negatedglue(glue) + local n = copy_nut(glue) + local s = copy_nut(getfield(n,"spec")) + local width = getfield(s,"width") + local stretch = getfield(s,"stretch") + local shrink = getfield(s,"shrink") + if width then setfield(s,"width", -width) end + if stretch then setfield(s,"stretch",-stretch) end + if shrink then setfield(s,"shrink", -shrink) end + setfield(n,"spec",s) + return n +end + function 
nutpool.leftskip(width,stretch,shrink,stretch_order,shrink_order) return someskip(leftskip,width,stretch,shrink,stretch_order,shrink_order) end @@ -288,19 +305,85 @@ function nutpool.rule(width,height,depth,dir) -- w/h/d == nil will let them adap return n end --- if node.has_field(latelua,'string') then - function nutpool.latelua(code) - local n = copy_nut(latelua) - setfield(n,"string",code) +function nutpool.latelua(code) + local n = copy_nut(latelua) + setfield(n,"string",code) + return n +end + +if context and _cldo_ then + + -- a typical case where we have more nodes than nuts + + local context = context + + local f_cldo = string.formatters["_cldo_(%i)"] + local register = context.registerfunction + + local latelua_node = register_node(new_node("whatsit",whatsitcodes.latelua)) + local latelua_nut = register_nut (new_nut ("whatsit",whatsitcodes.latelua)) + + local setfield_node = nodes.setfield + local setfield_nut = nuts .setfield + + function nodepool.lateluafunction(f) + local n = copy_node(latelua_node) + setfield_node(n,"string",f_cldo(register(f))) return n end --- else --- function nutpool.latelua(code) --- local n = copy_nut(latelua) --- setfield(n,"data",code) --- return n --- end --- end + function nutpool.lateluafunction(f) + local n = copy_nut(latelua_nut) + setfield_nut(n,"string",f_cldo(register(f))) + return n + end + + -- when function in latelua: + + -- function nodepool.lateluafunction(f) + -- local n = copy_node(latelua_node) + -- setfield_node(n,"string",f) + -- return n + -- end + -- function nutpool.lateluafunction(f) + -- local n = copy_nut(latelua_nut) + -- setfield_nut(n,"string",f) + -- return n + -- end + + local latefunction = nodepool.lateluafunction + local flushnode = context.flushnode + + function context.lateluafunction(f) + flushnode(latefunction(f)) -- hm, quite some indirect calls + end + + -- when function in latelua: + + -- function context.lateluafunction(f) + -- local n = copy_node(latelua_node) + -- 
setfield_node(n,"string",f) + -- flushnode(n) + -- end + + -- local contextsprint = context.sprint + -- local ctxcatcodes = tex.ctxcatcodes + -- local storenode = context.storenode + + -- when 0.79 is out: + + -- function context.lateluafunction(f) + -- contextsprint(ctxcatcodes,"\\cldl",storenode(latefunction(f))," ") + -- end + + -- when function in latelua: + + -- function context.lateluafunction(f) + -- local n = copy_node(latelua_node) + -- setfield_node(n,"string",f) + -- contextsprint(ctxcatcodes,"\\cldl",storenode(n)," ") + -- end + +end function nutpool.leftmarginkern(glyph,width) local n = copy_nut(left_margin_kern) @@ -444,6 +527,7 @@ local function cleanup(nofboxes) -- todo for i=0,nofboxes do local l = getbox(i) if l then +-- print(nodes.listtoutf(getlist(l))) free_nut(l) -- also list ? nl = nl + 1 end diff --git a/tex/context/base/pack-com.mkiv b/tex/context/base/pack-com.mkiv index 2c28d6b20..4ca77af1c 100644 --- a/tex/context/base/pack-com.mkiv +++ b/tex/context/base/pack-com.mkiv @@ -626,12 +626,12 @@ \unexpanded\def\placepairedbox[#1]% {\bgroup \edef\currentpairedbox{#1}% - \doifnextoptionalelse\pack_pairedboxes_place\pack_pairedboxes_place_indeed} + \doifnextoptionalcselse\pack_pairedboxes_place\pack_pairedboxes_place_indeed} \unexpanded\def\startplacepairedbox[#1]% {\bgroup \edef\currentpairedbox{#1}% - \doifnextoptionalelse\pack_pairedboxes_place\pack_pairedboxes_place_indeed} + \doifnextoptionalcselse\pack_pairedboxes_place\pack_pairedboxes_place_indeed} \unexpanded\def\stopplacepairedbox {} diff --git a/tex/context/base/pack-mis.mkvi b/tex/context/base/pack-mis.mkvi index 978cc120c..38fcc18e4 100644 --- a/tex/context/base/pack-mis.mkvi +++ b/tex/context/base/pack-mis.mkvi @@ -46,7 +46,7 @@ \unexpanded\def\pack_placement#tag% {\bgroup \edef\currentplacement{#tag}% - \doifnextoptionalelse\pack_placement_yes\pack_placement_nop} + \doifnextoptionalcselse\pack_placement_yes\pack_placement_nop} \def\pack_placement_yes[#settings]% 
{\setupcurrentplacement[#settings]% diff --git a/tex/context/base/pack-mrl.mkiv b/tex/context/base/pack-mrl.mkiv index 7c3f08825..3e81a4d69 100644 --- a/tex/context/base/pack-mrl.mkiv +++ b/tex/context/base/pack-mrl.mkiv @@ -40,7 +40,7 @@ \unexpanded\def\blackrule {\hbox\bgroup - \doifnextoptionalelse\pack_black_rule_pickup\pack_black_rule_indeed} + \doifnextoptionalcselse\pack_black_rule_pickup\pack_black_rule_indeed} \def\pack_black_rule_pickup[#1]% {\setupcurrentblackrules[#1]% @@ -96,7 +96,7 @@ \unexpanded\def\blackrules % probably never used {\hbox\bgroup - \doifnextoptionalelse\pack_black_rules_pickup\pack_black_rules_indeed} + \doifnextoptionalcselse\pack_black_rules_pickup\pack_black_rules_indeed} \def\pack_black_rules_pickup[#1]% {\setupcurrentblackrules[#1]% diff --git a/tex/context/base/pack-rul.lua b/tex/context/base/pack-rul.lua index c8ed0722b..5796da800 100644 --- a/tex/context/base/pack-rul.lua +++ b/tex/context/base/pack-rul.lua @@ -33,8 +33,6 @@ local getsubtype = nuts.getsubtype local getbox = nuts.getbox local hpack = nuts.hpack -local free = nuts.free -local copy = nuts.copy_list local traverse_id = nuts.traverse_id local node_dimensions = nuts.dimensions diff --git a/tex/context/base/page-flt.lua b/tex/context/base/page-flt.lua index 11aa2be21..7b1afc55c 100644 --- a/tex/context/base/page-flt.lua +++ b/tex/context/base/page-flt.lua @@ -21,20 +21,23 @@ local C, S, P, lpegmatch = lpeg.C, lpeg.S, lpeg.P, lpeg.match -- we use floatbox, floatwidth, floatheight -- text page leftpage rightpage (todo: top, bottom, margin, order) -local copy_node_list = node.copy_list +local copy_node_list = node.copy_list +local flush_node_list = node.flush_list +local copy_node = node.copy -local setdimen = tex.setdimen -local setcount = tex.setcount -local texgetbox = tex.getbox -local texsetbox = tex.setbox +local setdimen = tex.setdimen +local setcount = tex.setcount +local texgetbox = tex.getbox +local texsetbox = tex.setbox +local textakebox = nodes.takebox 
-floats = floats or { } -local floats = floats +floats = floats or { } +local floats = floats -local noffloats = 0 -local last = nil -local default = "text" -local pushed = { } +local noffloats = 0 +local last = nil +local default = "text" +local pushed = { } local function initialize() return { @@ -105,21 +108,20 @@ end function floats.save(which,data) which = which or default - local b = texgetbox("floatbox") + local b = textakebox("floatbox") if b then local stack = stacks[which] noffloats = noffloats + 1 - local w, h, d = b.width, b.height, b.depth local t = { n = noffloats, data = data or { }, - box = copy_node_list(b), + box = b, } - texsetbox("floatbox",nil) insert(stack,t) setcount("global","savednoffloats",#stacks[default]) if trace_floats then - report_floats("%s, category %a, number %a, slot %a, width %p, height %p, depth %p","saving",which,noffloats,#stack,w,h,d) + report_floats("%s, category %a, number %a, slot %a, width %p, height %p, depth %p","saving", + which,noffloats,#stack,b.width,b.height,b.depth) else interfaces.showmessage("floatblocks",2,noffloats) end @@ -132,14 +134,13 @@ function floats.resave(which) if last then which = which or default local stack = stacks[which] - local b = texgetbox("floatbox") - local w, h, d = b.width, b.height, b.depth - last.box = copy_node_list(b) - texsetbox("floatbox",nil) + local b = textakebox("floatbox") + last.box = b insert(stack,1,last) setcount("global","savednoffloats",#stacks[default]) if trace_floats then - report_floats("%s, category %a, number %a, slot %a width %p, height %p, depth %p","resaving",which,noffloats,#stack,w,h,d) + report_floats("%s, category %a, number %a, slot %a width %p, height %p, depth %p","resaving", + which,noffloats,#stack,b.width,b.height,b.depth) else interfaces.showmessage("floatblocks",2,noffloats) end @@ -153,9 +154,10 @@ function floats.flush(which,n,bylabel) local stack = stacks[which] local t, b, n = get(stack,n or 1,bylabel) if t then - local w, h, d = setdimensions(b) 
if trace_floats then - report_floats("%s, category %a, number %a, slot %a width %p, height %p, depth %p","flushing",which,t.n,n,w,h,d) + local w, h, d = setdimensions(b) -- ? + report_floats("%s, category %a, number %a, slot %a width %p, height %p, depth %p","flushing", + which,t.n,n,w,h,d) else interfaces.showmessage("floatblocks",3,t.n) end @@ -173,9 +175,10 @@ function floats.consult(which,n) local stack = stacks[which] local t, b, n = get(stack,n) if t then - local w, h, d = setdimensions(b) if trace_floats then - report_floats("%s, category %a, number %a, slot %a width %p, height %p, depth %p","consulting",which,t.n,n,w,h,d) + local w, h, d = setdimensions(b) + report_floats("%s, category %a, number %a, slot %a width %p, height %p, depth %p","consulting", + which,t.n,n,w,h,d) end return t, b, n else @@ -270,16 +273,16 @@ end -- interface -local context = context -local setvalue = context.setvalue +local context = context +local context_setvalue = context.setvalue -commands.flushfloat = floats.flush -commands.savefloat = floats.save -commands.resavefloat = floats.resave -commands.pushfloat = floats.push -commands.popfloat = floats.pop -commands.consultfloat = floats.consult -commands.collectfloat = floats.collect +commands.flushfloat = floats.flush +commands.savefloat = floats.save +commands.resavefloat = floats.resave +commands.pushfloat = floats.push +commands.popfloat = floats.pop +commands.consultfloat = floats.consult +commands.collectfloat = floats.collect function commands.getfloatvariable (...) local v = floats.getvariable(...) if v then context(v) end end function commands.checkedpagefloat (...) local v = floats.checkedpagefloat(...) if v then context(v) end end @@ -289,8 +292,8 @@ function commands.doifelsesavedfloat(...) commands.doifelse(floats.nofstacked(.. 
function commands.analysefloatmethod(str) -- currently only one method local method, label, row, column = floats.analysemethod(str) - setvalue("floatmethod",method or "") - setvalue("floatlabel", label or "") - setvalue("floatrow", row or "") - setvalue("floatcolumn",column or "") + context_setvalue("floatmethod",method or "") + context_setvalue("floatlabel", label or "") + context_setvalue("floatrow", row or "") + context_setvalue("floatcolumn",column or "") end diff --git a/tex/context/base/page-imp.mkiv b/tex/context/base/page-imp.mkiv index cfa535ab2..230ede570 100644 --- a/tex/context/base/page-imp.mkiv +++ b/tex/context/base/page-imp.mkiv @@ -41,7 +41,7 @@ \prependtoks \page_shipouts_flush_text_data \to \everylastshipout - + % Problem: we need to apply the finalizers to a to be shipped out page (as % we can have positioning involved). However, we can also add stuff in the % imposition, like cropmarks. Fortunately we do that with metapost so diff --git a/tex/context/base/page-lin.lua b/tex/context/base/page-lin.lua index 66b7e4684..0b241240c 100644 --- a/tex/context/base/page-lin.lua +++ b/tex/context/base/page-lin.lua @@ -39,10 +39,16 @@ local v_page = variables.page local v_no = variables.no local nodecodes = nodes.nodecodes +local skipcodes = nodes.skipcodes +local whatcodes = nodes.whatcodes local hlist_code = nodecodes.hlist local vlist_code = nodecodes.vlist local whatsit_code = nodecodes.whatsit +local glue_code = nodecodes.glue +local glyph_code = nodecodes.glyph +local leftskip_code = skipcodes.leftskip +local textdir_code = whatcodes.dir local a_displaymath = attributes.private('displaymath') local a_linenumber = attributes.private('linenumber') @@ -56,6 +62,7 @@ local chunksize = 250 -- not used in boxed local nuts = nodes.nuts local getid = nuts.getid +local getsubtype = nuts.getsubtype local getnext = nuts.getnext local getattr = nuts.getattr local getlist = nuts.getlist @@ -73,6 +80,12 @@ local insert_node_before = nuts.insert_before local 
is_display_math = nuts.is_display_math local leftmarginwidth = nuts.leftmarginwidth +local negated_glue = nuts.pool.negatedglue +local new_hlist = nuts.pool.hlist + +local ctx_convertnumber = context.convertnumber +local ctx_makelinenumber = context.makelinenumber + -- cross referencing function lines.number(n) @@ -122,7 +135,7 @@ filters.line = filters.line or { } function filters.line.default(data) -- helpers.title(data.entries.linenumber or "?",data.metadata) - context.convertnumber(data.entries.conversion or "numbers",data.entries.linenumber or "0") + ctx_convertnumber(data.entries.conversion or "numbers",data.entries.linenumber or "0") end function filters.line.page(data,prefixspec,pagespec) -- redundant @@ -195,7 +208,7 @@ local function check_number(n,a,skip,sameline) report_lines("skipping line number %s for setup %a: %s (%s)",#current_list,a,s,d.continue or v_no) end end - context.makelinenumber(tag,skipflag,s,getfield(n,"shift"),getfield(n,"width"),leftmarginwidth(getlist(n)),getfield(n,"dir")) + ctx_makelinenumber(tag,skipflag,s,getfield(n,"shift"),getfield(n,"width"),leftmarginwidth(getlist(n)),getfield(n,"dir")) end end @@ -206,17 +219,18 @@ end local function identify(list) if list then for n in traverse_id(hlist_code,list) do - if getattr(n,a_linenumber) then - return list + local a = getattr(n,a_linenumber) + if a then + return list, a end end local n = list while n do local id = getid(n) if id == hlist_code or id == vlist_code then - local ok = identify(getlist(n)) + local ok, a = identify(getlist(n)) if ok then - return ok + return ok, a end end n = getnext(n) @@ -236,65 +250,141 @@ function boxed.stage_one(n,nested) current_list = { } local box = getbox(n) if box then - local list = getlist(box) - if nested then - list = identify(list) + local found = nil + local list = getlist(box) + if list and nested then + list, found = identify(list) end - local last_a, last_v, skip = nil, -1, false - for n in traverse_id(hlist_code,list) do -- attr test 
here and quit as soon as zero found - if getfield(n,"height") == 0 and getfield(n,"depth") == 0 then - -- skip funny hlists -- todo: check line subtype - else - local list = getlist(n) - local a = getattr(list,a_linenumber) - if a and a > 0 then - if last_a ~= a then - local da = data[a] - local ma = da.method - if ma == v_next then - skip = true - elseif ma == v_page then - da.start = 1 -- eventually we will have a normal counter - end - last_a = a - if trace_numbers then - report_lines("starting line number range %s: start %s, continue %s",a,da.start,da.continue or v_no) + if list then + local last_a, last_v, skip = nil, -1, false + for n in traverse_id(hlist_code,list) do -- attr test here and quit as soon as zero found + if getfield(n,"height") == 0 and getfield(n,"depth") == 0 then + -- skip funny hlists -- todo: check line subtype + else + local list = getlist(n) + local a = getattr(list,a_linenumber) + if not a or a == 0 then + local n = getnext(list) + while n do + local id = getid(n) + if id == whatsit_code and getsubtype(n) == textdir_code then + n = getnext(n) + elseif id == glue_code and getsubtype(n) == leftskip_code then + n = getnext(n) + else +if id == glyph_code then + break +else + -- can be hlist or skip (e.g. 
footnote line) + n = getnext(n) +end + end end + a = n and getattr(n,a_linenumber) end - if getattr(n,a_displaymath) then - if is_display_math(n) then - check_number(n,a,skip) + if a and a > 0 then + if last_a ~= a then + local da = data[a] + local ma = da.method + if ma == v_next then + skip = true + elseif ma == v_page then + da.start = 1 -- eventually we will have a normal counter + end + last_a = a + if trace_numbers then + report_lines("starting line number range %s: start %s, continue %s",a,da.start,da.continue or v_no) + end end - else - local v = getattr(list,a_verbatimline) - if not v or v ~= last_v then - last_v = v - check_number(n,a,skip) + if getattr(n,a_displaymath) then + if is_display_math(n) then + check_number(n,a,skip) + end else - check_number(n,a,skip,true) + local v = getattr(list,a_verbatimline) + if not v or v ~= last_v then + last_v = v + check_number(n,a,skip) + else + check_number(n,a,skip,true) + end end + skip = false end - skip = false end end end end end +-- [dir][leftskip][content] + +function boxed.stage_two(n,m) + if #current_list > 0 then + m = m or lines.scratchbox + local t, tn = { }, 0 + for l in traverse_id(hlist_code,getlist(getbox(m))) do + tn = tn + 1 + t[tn] = copy_node(l) -- use take_box instead + end + for i=1,#current_list do + local li = current_list[i] + local n, m, ti = li[1], li[2], t[i] + if ti then + local l = getlist(n) + -- we want to keep leftskip at the start +-- local id = getid(l) +-- if id == whatsit_code and getsubtype(l) == textdir_code then +-- l = getnext(l) +-- id = getid(l) +-- end +-- if getid(l) == glue_code and getsubtype(l) == leftskip_code then +-- -- [leftskip] [number] [rest] +-- local forward = copy_node(l) +-- local backward = negated_glue(l) +-- local next = getnext(l) +-- setfield(l,"next",backward) +-- setfield(backward,"prev",l) +-- setfield(backward,"next",ti) +-- setfield(ti,"prev",backward) +-- setfield(ti,"next",forward) +-- setfield(forward,"prev",ti) +-- 
setfield(forward,"next",next) +-- setfield(next,"prev",forward) +-- else + -- [number] [rest] + setfield(ti,"next",l) + setfield(l,"prev",ti) + setfield(n,"list",ti) +-- end + resolve(n,m) + else + report_lines("error in linenumbering (1)") + return + end + end + end +end + function boxed.stage_two(n,m) if #current_list > 0 then m = m or lines.scratchbox local t, tn = { }, 0 for l in traverse_id(hlist_code,getlist(getbox(m))) do tn = tn + 1 - t[tn] = copy_node(l) + t[tn] = copy_node(l) -- use take_box instead end for i=1,#current_list do local li = current_list[i] local n, m, ti = li[1], li[2], t[i] if ti then - setfield(ti,"next",getlist(n)) - setfield(n,"list",ti) + local l = getlist(n) + setfield(ti,"next",l) + setfield(l,"prev",ti) + local h = copy_node(n) + setfield(h,"dir","TLT") + setfield(h,"list",ti) + setfield(n,"list",h) resolve(n,m) else report_lines("error in linenumbering (1)") diff --git a/tex/context/base/page-lin.mkiv b/tex/context/base/page-lin.mkvi index ae293091c..e3b628487 100644 --- a/tex/context/base/page-lin.mkiv +++ b/tex/context/base/page-lin.mkvi @@ -15,6 +15,7 @@ % get rid of \mk* (left over from experimental times) % % to be redone (was experiment) .. 
can be hooked into margin code +% reshuffle arguments \writestatus{loading}{ConTeXt Core Macros / Line Numbering} @@ -61,14 +62,16 @@ \installcorenamespace{linenumberinginstance} +% tag skipflag s getfield(n,"shift") getfield(n,"width") leftmarginwidth(getlist(n)) getfield(n,"dir")) + \let\makelinenumber\gobblesevenarguments % used at lua end \newconditional\page_postprocessors_needed_box -\unexpanded\def\page_postprocessors_linenumbers_page #1{\page_lines_add_numbers_to_box{#1}\plusone \plusone \zerocount} -\unexpanded\def\page_postprocessors_linenumbers_box #1{\page_lines_add_numbers_to_box{#1}\plusone \plusone \zerocount} -\unexpanded\def\page_postprocessors_linenumbers_deepbox#1{\page_lines_add_numbers_to_box{#1}\plusone \plusone \plusone } -\unexpanded\def\page_postprocessors_linenumbers_column #1{\page_lines_add_numbers_to_box{#1}\currentcolumn\nofcolumns\zerocount} +\unexpanded\def\page_postprocessors_linenumbers_page #tag{\page_lines_add_numbers_to_box{#tag}\plusone \plusone \zerocount} +\unexpanded\def\page_postprocessors_linenumbers_box #tag{\page_lines_add_numbers_to_box{#tag}\plusone \plusone \zerocount} +\unexpanded\def\page_postprocessors_linenumbers_deepbox#tag{\page_lines_add_numbers_to_box{#tag}\plusone \plusone \plusone } +\unexpanded\def\page_postprocessors_linenumbers_column #tag{\page_lines_add_numbers_to_box{#tag}\currentcolumn\nofcolumns\zerocount} \def\page_lines_parameters_regular {continue = "\ifnum\c_page_lines_mode=\zerocount\v!yes\else\v!no\fi", @@ -143,47 +146,20 @@ \c!step=1, \c!method=\v!first, \c!continue=\v!no, - \c!location=\v!left, \c!style=, \c!color=, \c!width=2\emwidth, \c!left=, \c!right=, \c!command=, + \c!margin=2.5\emwidth, \c!distance=\zeropoint, + \c!location=\v!default, % depends on direction, columns etc \c!align=\v!auto] \definelinenumbering [] -% no intermediate changes in values, define a class, otherwise each range -% would need a number - -% todo: text - -\installcorenamespace{linenumberinglocation} 
-\installcorenamespace{linenumberingalternative} - -\expandafter\let\csname\??linenumberinglocation\v!middle \endcsname \zerocount -\expandafter\let\csname\??linenumberinglocation\v!left \endcsname \plusone -\expandafter\let\csname\??linenumberinglocation\v!margin \endcsname \plusone -\expandafter\let\csname\??linenumberinglocation\v!inmargin \endcsname \plusone -\expandafter\let\csname\??linenumberinglocation\v!inleft \endcsname \plusone -\expandafter\let\csname\??linenumberinglocation\v!right \endcsname \plustwo -\expandafter\let\csname\??linenumberinglocation\v!inright \endcsname \plustwo -\expandafter\let\csname\??linenumberinglocation\v!inner \endcsname \plusthree -\expandafter\let\csname\??linenumberinglocation\v!outer \endcsname \plusfour -\expandafter\let\csname\??linenumberinglocation\v!text \endcsname \plusfive -\expandafter\let\csname\??linenumberinglocation\v!begin \endcsname \plussix -\expandafter\let\csname\??linenumberinglocation\v!end \endcsname \plusseven - -\expandafter\let\csname\??linenumberingalternative\v!middle \endcsname \zerocount -\expandafter\let\csname\??linenumberingalternative\v!right \endcsname \plusone -\expandafter\let\csname\??linenumberingalternative\v!flushleft \endcsname \plusone -\expandafter\let\csname\??linenumberingalternative\v!left \endcsname \plustwo -\expandafter\let\csname\??linenumberingalternative\v!flushright\endcsname \plustwo -\expandafter\let\csname\??linenumberingalternative\v!auto \endcsname \plusfive - % \startlinenumbering[<startvalue>|continue|settings|name] % \startlinenumbering[name][<startvalue>|continue|settings] @@ -254,8 +230,30 @@ \fi \page_lines_start_followup} +\newconditional\c_page_lines_auto_narrow + \def\page_lines_start_followup {\numberinglinestrue + \edef\p_location{\linenumberingparameter\c!location}% + \setfalse\c_page_lines_auto_narrow + \ifhmode \else + \ifx\p_location\v!text + \ifdim\leftskip>\zeropoint \else + \advance\leftskip\linenumberingparameter\c!margin + 
\settrue\c_page_lines_auto_narrow + \fi + \else\ifx\p_location\v!begin + \ifdim\leftskip>\zeropoint \else + \advance\leftskip\linenumberingparameter\c!margin + \settrue\c_page_lines_auto_narrow + \fi + \else\ifx\p_location\v!end + \ifdim\leftskip>\zeropoint \else + \advance\rightskip\linenumberingparameter\c!margin + \settrue\c_page_lines_auto_narrow + \fi + \fi\fi\fi + \fi \the\beforeeverylinenumbering \globallet\page_postprocessors_page \page_postprocessors_linenumbers_page \globallet\page_postprocessors_column\page_postprocessors_linenumbers_column @@ -265,67 +263,39 @@ \or \page_lines_start_define % only when assignment \fi - \attribute\linenumberattribute\getvalue{\??linenumberinginstance\currentlinenumbering}\relax} + \attribute\linenumberattribute\csname\??linenumberinginstance\currentlinenumbering\endcsname\relax} \unexpanded\def\stoplinenumbering {\attribute\linenumberattribute\attributeunsetvalue \the\aftereverylinenumbering + \ifconditional\c_page_lines_auto_narrow\par\fi \endgroup} % number placement .. 
will change into (the new) margin code -\def\page_lines_number_inner_indeed{\doifoddpageelse\page_lines_number_left_indeed\page_lines_number_right_indeed} -\def\page_lines_number_outer_indeed{\doifoddpageelse\page_lines_number_right_indeed\page_lines_number_left_indeed} - -\def\page_lines_number_left - {\ifcase\c_page_lines_location - \expandafter\page_lines_number_left_indeed - \or - \expandafter\page_lines_number_left_indeed - \or - \expandafter\page_lines_number_left_indeed - \or - \expandafter\page_lines_number_inner_indeed - \or - \expandafter\page_lines_number_outer_indeed - \or - \expandafter\page_lines_number_text_indeed - \or - \expandafter\page_lines_number_begin_indeed - \or - \expandafter\page_lines_number_end_indeed - \fi} - -\def\page_lines_number_right - {\ifcase\c_page_lines_location - \expandafter\page_lines_number_right_indeed - \or - \expandafter\page_lines_number_right_indeed - \or - \expandafter\page_lines_number_right_indeed - \or - \expandafter\page_lines_number_outer_indeed - \or - \expandafter\page_lines_number_inner_indeed - \or - \expandafter\page_lines_number_text_indeed - \or - \expandafter\page_lines_number_end_indeed - \or - \expandafter\page_lines_number_begin_indeed - \fi} - \newconditional\c_page_lines_fake_number \newconstant \b_page_lines_number \newconstant \c_page_lines_column \newconstant \c_page_lines_last_column +\newdimen \d_page_lines_line_width +\settrue \c_page_lines_dir_left_to_right + +\installcorenamespace{linenumberinghandler} + +\def\page_line_swap_align % can become a helper + {\ifx\p_align\v!inner \let\p_align\v!outer \else + \ifx\p_align\v!outer \let\p_align\v!inner \else + \ifx\p_align\v!flushleft \let\p_align\v!flushright\else + \ifx\p_align\v!flushright\let\p_align\v!flushleft \else + \ifx\p_align\v!left \let\p_align\v!right \else + \ifx\p_align\v!right \let\p_align\v!left \fi\fi\fi\fi\fi\fi} -\def\page_lines_add_numbers_to_box#1#2#3#4% box col max nesting 
+\def\page_lines_add_numbers_to_box#box#column#max#nesting% {\bgroup - \b_page_lines_number #1\relax - \c_page_lines_column #2\relax - \c_page_lines_last_column#3\relax - \c_page_lines_nesting #4\relax + \b_page_lines_number #box\relax + \c_page_lines_column #column\relax + \c_page_lines_last_column#max\relax + \c_page_lines_nesting #nesting\relax \fullrestoreglobalbodyfont \let\makelinenumber\page_lines_make_number % used at lua end \setbox\b_page_lines_scratch\vbox @@ -337,171 +307,195 @@ \let\page_lines_make_number_indeed\relax -\def\page_lines_make_number#1#2% - {\edef\currentlinenumbering{#1}% - \ifcase#2\relax - \settrue \c_page_lines_fake_number +% \def\page_lines_rlap{\ifconditional\c_page_lines_dir_left_to_right\expandafter\rlap\else\expandafter\llap\fi} +% \def\page_lines_llap{\ifconditional\c_page_lines_dir_left_to_right\expandafter\llap\else\expandafter\rlap\fi} + +\def\page_lines_add_numbers_to_box#box#column#max#nesting% + {\bgroup + \b_page_lines_number #box\relax + \c_page_lines_column #column\relax + \c_page_lines_last_column#max\relax + \c_page_lines_nesting #nesting\relax + \fullrestoreglobalbodyfont + \let\makelinenumber\page_lines_make_number % used at lua end + \setbox\b_page_lines_scratch\vbox + {\forgetall + \offinterlineskip + \ctxcommand{linenumbersstageone(\number\b_page_lines_number,\ifcase\c_page_lines_nesting false\else true\fi)}}% + \ctxcommand{linenumbersstagetwo(\number\b_page_lines_number,\number\b_page_lines_scratch)}% can move to lua code + \egroup} + +\def\page_lines_make_number#tag#mode#linenumber#shift#width#leftskip#dir% beware, one needs so compensate for this in the \hsize + {\naturalhbox to \zeropoint \bgroup + \ifcase#mode\relax + % \settrue \c_page_lines_fake_number \else - \setfalse\c_page_lines_fake_number - \fi - \c_page_lines_location \executeifdefined{\??linenumberinglocation \linenumberingparameter\c!location}\plusone \relax % left - 
\c_page_lines_alignment\executeifdefined{\??linenumberingalternative\linenumberingparameter\c!align }\plusfive\relax % auto - \ifcase\c_page_lines_last_column\relax - \settrue \c_page_lines_fake_number - \or - % one column - \ifcase\c_page_lines_location - \settrue \c_page_lines_fake_number - \let\page_lines_make_number_indeed\page_lines_number_fake_indeed - \or - \let\page_lines_make_number_indeed\page_lines_number_left - \or - \let\page_lines_make_number_indeed\page_lines_number_right - \or % inner - \let\page_lines_make_number_indeed\page_lines_number_inner_indeed - \or % outer - \let\page_lines_make_number_indeed\page_lines_number_outer_indeed - \or % text - \let\page_lines_make_number_indeed\page_lines_number_text_indeed + % \setfalse\c_page_lines_fake_number + \edef\currentlinenumbering{#tag}% + \def\linenumber{#linenumber}% unsafe + \d_page_lines_line_width#width\scaledpoint\relax + \d_page_lines_distance\linenumberingparameter\c!distance\relax + \edef\p_align{\linenumberingparameter\c!align}% + \edef\p_location{\linenumberingparameter\c!location}% + \ifcase\istltdir#dir\relax + \settrue \c_page_lines_dir_left_to_right + \else + \setfalse\c_page_lines_dir_left_to_right + \fi + % + % maybe we also need an option to ignore columns, so that we renumber + % once but on the other hand this assumes aligned lines + % + \ifcase\c_page_lines_last_column\relax + \settrue \c_page_lines_fake_number % why \or - \let\page_lines_make_number_indeed\page_lines_number_begin_indeed + % one column \or - \let\page_lines_make_number_indeed\page_lines_number_end_indeed + % two columns + \ifx\p_location\v!default % or just margin + \ifcase\c_page_lines_column\relax + \settrue \c_page_lines_fake_number % why + \or + % one + \let\p_location\v!left + \else + % two + \let\p_location\v!right + % can become a helper + \page_line_swap_align + \fi + \fi + \else + % too fuzzy \fi - \else\ifcase\c_page_lines_column\relax - \settrue \c_page_lines_fake_number - \or - 
\let\page_lines_make_number_indeed\page_lines_number_left - \ifcase\c_page_lines_location\or - \c_page_lines_location\plusone - \or - \c_page_lines_location\plustwo + \ifx\p_location\v!default + \ifconditional\c_page_lines_dir_left_to_right + \let\p_location\v!left + \else + \let\p_location\v!right + \page_line_swap_align % yes or no + \fi + \fi + % + \executeifdefined{\??linenumberinghandler\p_location}\relax + \fi + \egroup} + +\def\page_lines_number_inject#align#width% + {\edef\p_width{\linenumberingparameter\c!width}% + \ifx\p_width\v!margin + \d_page_lines_width#width% + \else + \d_page_lines_width\p_width + \fi + \relax + \ifdim\d_page_lines_width>\zeropoint +% \ifconditional\c_page_lines_dir_left_to_right\else +% \let\simplealignedbox\simplereversealignedbox +% \fi + \ifconditional\tracelinenumbering + \ruledhbox{\simplealignedbox\d_page_lines_width#align{\page_lines_number_inject_indeed}}% \else - \c_page_lines_location\plusone - \or - \c_page_lines_location\plusone - \or - \c_page_lines_location\plusone - \or - \c_page_lines_location\plusone % todo - \or - \c_page_lines_location\plusone % todo + \simplealignedbox\d_page_lines_width#align{\page_lines_number_inject_indeed}% \fi \else - \let\page_lines_make_number_indeed\page_lines_number_right - \ifcase\c_page_lines_location\or - \c_page_lines_location\plustwo - \or - \c_page_lines_location\plusone - \or - \c_page_lines_location\plustwo - \or - \c_page_lines_location\plustwo - \or - \c_page_lines_location\plustwo % todo - \or - \c_page_lines_location\plustwo % todo + \ifconditional\tracelinenumbering + \ruledhbox + \else + % \hbox \fi - \fi\fi - \page_lines_make_number_indeed{#1}} - -\let\page_lines_number_fake_indeed\gobblesixarguments % needs checking - -\def\page_lines_number_text_indeed#1#2#3#4#5#6% beware, one needs so compensate for this in the \hsize - {\hbox{\page_lines_number_construct{#1}{2}{#2}{#5}\hskip#3\scaledpoint}} - -\def\page_lines_number_left_indeed#1#2#3#4#5#6% - {\naturalhbox to 
\zeropoint - {\ifcase\istltdir#6\else \hskip-#4\scaledpoint \fi - \llap{\page_lines_number_construct{#1}{2}{#2}{#5}\kern#3\scaledpoint}}} - -\def\page_lines_number_right_indeed#1#2#3#4#5#6% - {\naturalhbox to \zeropoint - {\ifcase\istltdir#6\else \hskip-#4\scaledpoint \fi - \rlap{\hskip\dimexpr#4\scaledpoint+#3\scaledpoint\relax\page_lines_number_construct{#1}{1}{#2}{#5}}}} + {\page_lines_number_inject_indeed}% + \fi} -\def\page_lines_number_begin_indeed#1#2#3#4#5#6% - {\ifcase\istltdir#6\relax - \c_page_lines_location\plusone - \expandafter\page_lines_number_left_indeed - \else - \c_page_lines_location\plustwo - \expandafter\page_lines_number_left_indeed - \fi{#1}{#2}{#3}{#4}{#5}{#6}} - -\def\page_lines_number_end_indeed#1#2#3#4#5#6% - {\ifcase\istltdir#6\relax - \c_page_lines_location\plustwo - \expandafter\page_lines_number_left_indeed +\def\page_lines_number_inject_indeed + {\uselinenumberingstyleandcolor\c!style\c!color + \linenumberingparameter\c!command + {\linenumberingparameter\c!left + \convertnumber{\linenumberingparameter\c!conversion}\linenumber + \linenumberingparameter\c!right}} + +% \def\dodorlap{\hbox to \zeropoint{\box\nextbox\normalhss}\endgroup} +% \def\dodollap{\hbox to \zeropoint{\normalhss\box\nextbox}\endgroup} + +\def\page_line_handle_left#align#width#distance% + {\llap + {\page_lines_number_inject#align#width% + \kern\dimexpr#distance+\d_page_lines_distance\relax + \the\everylinenumber + \hss}} + +\def\page_line_handle_right#align#width#distance% + {\rlap + {\kern\dimexpr#distance+\d_page_lines_distance+\d_page_lines_line_width\relax + \page_lines_number_inject#align#width% + \the\everylinenumber}} + +\setuvalue{\??linenumberinghandler\v!left}% + {\page_line_handle_left\p_align\leftmarginwidth\leftmargindistance} + +\setuvalue{\??linenumberinghandler\v!right}% + {\page_line_handle_right\p_align\rightmarginwidth\rightmargindistance} + +\setuvalue{\??linenumberinghandler\v!inner}% + {\ifodd\realpageno + \ifx\p_align\v!inner + 
\page_line_handle_left\v!flushleft\leftmarginwidth\leftmargindistance + \else\ifx\p_align\v!outer + \page_line_handle_left\v!flushright\leftmarginwidth\leftmargindistance + \else + \page_line_handle_left\p_align\leftmarginwidth\leftmargindistance + \fi\fi \else - \c_page_lines_location\plusone - \expandafter\page_lines_number_left_indeed - \fi{#1}{#2}{#3}{#4}{#5}{#6}} + \ifx\p_align\v!inner + \page_line_handle_right\v!flushright\rightmarginwidth\rightmargindistance + \else\ifx\p_align\v!outer + \page_line_handle_right\v!flushleft\rightmarginwidth\rightmargindistance + \else + \page_line_handle_right\p_align\rightmarginwidth\rightmargindistance + \fi\fi + \fi} -\def\page_lines_number_construct#1#2#3#4% tag 1=left|2=right linenumber leftskip - {\begingroup - \def\currentlinenumbering{#1}% - \def\linenumber{#3}% unsafe - \doifelse{\linenumberingparameter\c!width}\v!margin - {\d_page_lines_width\leftmarginwidth} - {\d_page_lines_width\linenumberingparameter\c!width}% - \d_page_lines_distance\linenumberingparameter\c!distance\relax - \ifcase#2\relax\or\hskip\d_page_lines_distance\fi\relax - \ifnum\c_page_lines_location=\plusfive - \scratchdimen\dimexpr#4\scaledpoint-\d_page_lines_distance\relax - \c_page_lines_location\plusone +\setuvalue{\??linenumberinghandler\v!outer}% + {\ifodd\realpageno + \ifx\p_align\v!inner + \page_line_handle_right\v!flushleft\leftmarginwidth\leftmargindistance + \else\ifx\p_align\v!outer + \page_line_handle_right\v!flushright\leftmarginwidth\leftmargindistance + \else + \page_line_handle_right\p_align\leftmarginwidth\leftmargindistance + \fi\fi \else - \scratchdimen\zeropoint - \fi - \ifcase\c_page_lines_alignment - \c_page_lines_location\zerocount % middle - \or - \c_page_lines_location\plusone % left - \or - \c_page_lines_location\plustwo % right - \fi - \ifconditional\tracelinenumbering\ruledhbox\else\hbox\fi to \d_page_lines_width - {\ifcase\c_page_lines_location - \hss % middle - \or - % left - \or - \hss % right - \or - 
\doifoddpageelse\relax\hss % inner - \or - \doifoddpageelse\hss\relax % outer - \fi - \ifconditional\c_page_lines_fake_number - % we need to reserve space - \else - \uselinenumberingstyleandcolor\c!style\c!color - \linenumberingparameter\c!command - {\linenumberingparameter\c!left - \convertnumber{\linenumberingparameter\c!conversion}{#3}% - \linenumberingparameter\c!right}% - \fi - \ifcase\c_page_lines_location - \hss % middle - \or - \hss % left - \or - % right - \or - \doifoddpageelse\hss\relax % inner - \or - \doifoddpageelse\relax\hss % outer - \fi}% - \ifcase#2\relax - \hskip-\scratchdimen - \or - \hskip-\scratchdimen - \or - \hskip\dimexpr\d_page_lines_distance-\scratchdimen\relax - \fi - \relax - \the\everylinenumber - \endgroup} + \ifx\p_align\v!inner + \page_line_handle_left\v!flushright\rightmarginwidth\rightmargindistance + \else\ifx\p_align\v!outer + \page_line_handle_left\v!flushleft\rightmarginwidth\rightmargindistance + \else + \page_line_handle_left\p_align\rightmarginwidth\rightmargindistance + \fi\fi + \fi} + +\def\page_line_handle_begin#align% + {\rlap + {\kern\d_page_lines_distance + \page_lines_number_inject#align\zeropoint + \the\everylinenumber}} + +\def\page_line_handle_end#align% + {\rlap + {\kern\d_page_lines_line_width\relax + \llap + {\page_lines_number_inject#align\zeropoint + \kern\d_page_lines_distance + \the\everylinenumber}}} + +\setuvalue{\??linenumberinghandler\v!begin}{\page_line_handle_begin\p_align} +\setuvalue{\??linenumberinghandler\v!end }{\page_line_handle_end \p_align} +\setuvalue{\??linenumberinghandler\v!text }{\page_line_handle_begin\p_align} + +\setuevalue{\??linenumberinghandler\v!inleft }{\getvalue{\??linenumberinghandler\v!left }} +\setuevalue{\??linenumberinghandler\v!inmargin}{\getvalue{\??linenumberinghandler\v!left }} +\setuevalue{\??linenumberinghandler\v!margin }{\getvalue{\??linenumberinghandler\v!left }} +\setuevalue{\??linenumberinghandler\v!inright }{\getvalue{\??linenumberinghandler\v!right}} % 
referencing: \permithyphenation, also removes leading spaces (new per 29-11-2013) @@ -523,7 +517,6 @@ \expandafter\gobbleoneargument \fi} - \def\page_lines_reference_show_start_indeed#1% {\setbox\scratchbox\hbox{\llap {\vrule\s!width\onepoint\s!depth\strutdp\s!height.8\strutht\raise.85\strutht\hbox{\llap{\tt\txx#1}}}}% diff --git a/tex/context/base/page-mix.lua b/tex/context/base/page-mix.lua index 30a1fdccd..0fbaa4e30 100644 --- a/tex/context/base/page-mix.lua +++ b/tex/context/base/page-mix.lua @@ -295,18 +295,47 @@ local function setsplit(specification) -- a rather large function local rest = nil local lastlocked = nil local lastcurrent = nil + local lastcontent = nil local backtracked = false if trace_state then report_state("setting collector to column %s",column) end + local function unlock(penalty) + if lastlocked then + if trace_state then + report_state("penalty %s, unlocking in column %s",penalty or "-",column) + end + lastlocked = nil + end + lastcurrent = nil + lastcontent = nil + end + + local function lock(penalty,current) + if trace_state then + report_state("penalty %s, locking in column %s",penalty,column) + end + lastlocked = penalty + lastcurrent = current or lastcurrent + lastcontent = nil + end + local function backtrack(start) local current = start -- first skip over glue and penalty while current do local id = getid(current) - if id == glue_code or id == penalty_code then + if id == glue_code then + if trace_state then + report_state("backtracking over %s in column %s","glue",column) + end + current = getprev(current) + elseif id == penalty_code then + if trace_state then + report_state("backtracking over %s in column %s","penalty",column) + end current = getprev(current) else break @@ -315,13 +344,24 @@ local function setsplit(specification) -- a rather large function -- then skip over content while current do local id = getid(current) - if id == glue_code or id == penalty_code then + if id == glue_code then + if trace_state then + 
report_state("quitting at %s in column %s","glue",column) + end + break + elseif id == penalty_code then + if trace_state then + report_state("quitting at %s in column %s","penalty",column) + end break else current = getprev(current) end end if not current then + if trace_state then + report_state("no effective backtracking in column %s",column) + end current = start end return current @@ -338,7 +378,12 @@ local function setsplit(specification) -- a rather large function backtracked = true end lastcurrent = nil - lastlocked = nil + if lastlocked then + if trace_state then + report_state("unlocking in column %s",column) + end + lastlocked = nil + end end if head == lasthead then if trace_state then @@ -439,6 +484,9 @@ local function setsplit(specification) -- a rather large function else -- what else? ignore? treat as valid as usual? end + if lastcontent then + unlock() + end end local function process_kern(current,nxt) @@ -466,24 +514,27 @@ local function setsplit(specification) -- a rather large function local function process_rule(current,nxt) -- simple variant of h|vlist local advance = getfield(current,"height") -- + getfield(current,"depth") - local state, skipped = checked(advance+currentskips,"rule") - if trace_state then - report_state("%-7s > column %s, state %a, rule, advance %p, height %p","rule",column,state,advance,inserttotal,height) - if skipped ~= 0 then - report_state("%-7s > column %s, discarded %p","rule",column,skipped) + if advance ~= 0 then + local state, skipped = checked(advance,"rule") + if trace_state then + report_state("%-7s > column %s, state %a, rule, advance %p, height %p","rule",column,state,advance,inserttotal,height) + if skipped ~= 0 then + report_state("%-7s > column %s, discarded %p","rule",column,skipped) + end end + if state == "quit" then + return true + end + height = height + depth + skip + advance + -- if state == "next" then + -- height = height + nextskips + -- else + -- height = height + currentskips + -- end + depth = 
getfield(current,"depth") + skip = 0 end - if state == "quit" then - return true - end - height = height + depth + skip + advance - if state == "next" then - height = height + nextskips - else - height = height + currentskips - end - depth = getfield(current,"depth") - skip = 0 + lastcontent = current end -- okay, here we could do some badness like magic but we want something @@ -495,8 +546,7 @@ local function setsplit(specification) -- a rather large function local function process_penalty(current,nxt) local penalty = getfield(current,"penalty") if penalty == 0 then - lastlocked = nil - lastcurrent = nil + unlock(penalty) elseif penalty == forcedbreak then local needed = getattribute(current,a_checkedbreak) local proceed = not needed or needed == 0 @@ -508,8 +558,7 @@ local function setsplit(specification) -- a rather large function end end if proceed then - lastlocked = nil - lastcurrent = nil + unlock(penalty) local okay, skipped = gotonext() if okay then if trace_state then @@ -530,18 +579,15 @@ local function setsplit(specification) -- a rather large function end elseif penalty < 0 then -- we don't care too much - lastlocked = nil - lastcurrent = nil + unlock(penalty) elseif penalty >= 10000 then if not lastcurrent then - lastcurrent = current - lastlocked = penalty + lock(penalty,current) elseif penalty > lastlocked then - lastlocked = penalty + lock(penalty) end else - lastlocked = nil - lastcurrent = nil + unlock(penalty) end end @@ -582,8 +628,11 @@ local function setsplit(specification) -- a rather large function if trace_state then report_state("%-7s > column %s, height %p, depth %p, skip %p","line",column,height,depth,skip) end + lastcontent = current end +local kept = head + while current do local id = getid(current) @@ -633,14 +682,16 @@ local function setsplit(specification) -- a rather large function if not current then if trace_state then - report_state("nilling rest") + report_state("nothing left") end - rest = nil - elseif rest == lasthead then + 
-- needs well defined case + -- rest = nil + elseif rest == lasthead then if trace_state then - report_state("nilling rest as rest is lasthead") + report_state("rest equals lasthead") end - rest = nil + -- test case: x\index{AB} \index{AA}x \blank \placeindex + -- makes line disappear: rest = nil end if stripbottom then diff --git a/tex/context/base/page-mix.mkiv b/tex/context/base/page-mix.mkiv index d2bb38ca0..41897f6dd 100644 --- a/tex/context/base/page-mix.mkiv +++ b/tex/context/base/page-mix.mkiv @@ -75,7 +75,7 @@ \let\startmixedcolumns\relax % defined later \let\stopmixedcolumns \relax % defined later -\appendtoks +\appendtoks % could become an option \setuevalue{\e!start\currentmixedcolumns}{\startmixedcolumns[\currentmixedcolumns]}% \setuevalue{\e!stop \currentmixedcolumns}{\stopmixedcolumns}% \to \everydefinemixedcolumns @@ -500,7 +500,9 @@ \setvalue{\??mixedcolumnsstop\s!otr}% {\par \ifcase\c_page_mix_otr_nesting\or - \doif{\mixedcolumnsparameter\c!balance}\v!yes{\c_page_mix_routine\c_page_mix_routine_balance}% + \doifelse{\mixedcolumnsparameter\c!balance}\v!yes + {\c_page_mix_routine\c_page_mix_routine_balance}% + {\penalty-\plustenthousand}% weird hack, we need to trigger the otr sometimes (new per 20140306, see balancing-001.tex) \page_otr_trigger_output_routine \fi} @@ -540,6 +542,7 @@ {\ctxcommand{mixfinalize()}% \setbox\b_page_mix_collected\vbox \bgroup \ifvoid\b_page_mix_preceding \else + \page_postprocessors_linenumbers_deepbox\b_page_mix_preceding \box\b_page_mix_preceding \global\d_page_mix_preceding_height\zeropoint \nointerlineskip diff --git a/tex/context/base/page-mul.mkiv b/tex/context/base/page-mul.mkiv index 73d84fe14..0063b3311 100644 --- a/tex/context/base/page-mul.mkiv +++ b/tex/context/base/page-mul.mkiv @@ -960,7 +960,7 @@ \ifnum\c_page_mul_balance_tries>\c_page_mul_balance_tries_max\relax \showmessage\m!columns7\empty \else - \showmessage\m!columns8{\the\c_page_mul_balance_tries\space}% + 
\showmessage\m!columns8{\the\c_page_mul_balance_tries}% \fi \egroup} diff --git a/tex/context/base/page-run.mkiv b/tex/context/base/page-run.mkiv index dabf37252..1f2551ebc 100644 --- a/tex/context/base/page-run.mkiv +++ b/tex/context/base/page-run.mkiv @@ -79,13 +79,27 @@ local function todimen(name,unit,fmt) return number.todimen(tex.dimen[name],unit,fmt) end -function commands.showlayoutvariables(options) - - if options == "" then +local function checkedoptions(options) + if type(options) == "table" then + return options + elseif not options or options == "" then options = "pt,cm" end + options = utilities.parsers.settings_to_hash(options) + local n = 4 + for k, v in table.sortedhash(options) do + local m = tonumber(k) + if m then + n = m + end + end + options.n = n + return options +end + +function commands.showlayoutvariables(options) - local options = utilities.parsers.settings_to_hash(options) + options = checkedoptions(options) local dimensions = { "pt", "bp", "cm", "mm", "dd", "cc", "pc", "nd", "nc", "sp", "in" } @@ -215,6 +229,8 @@ end function commands.showlayout(options) + options = checkedoptions(options) + if tex.count.textlevel == 0 then commands.showlayoutvariables(options) @@ -225,7 +241,7 @@ function commands.showlayout(options) context.bgroup() context.showframe() context.setuplayout { marking = interfaces.variables.on } - for i=1,4 do + for i=1,(options.n or 4) do commands.showlayoutvariables(options) context.page() end diff --git a/tex/context/base/page-txt.mkvi b/tex/context/base/page-txt.mkvi index 240f0e00b..6d8d50028 100644 --- a/tex/context/base/page-txt.mkvi +++ b/tex/context/base/page-txt.mkvi @@ -440,12 +440,12 @@ \def\page_layouts_set_text_content[#vertical][#horizontal][#one][#two][#three]% header text middle text/text {\iffifthargument - \setvalue{\namedlayoutelementhash{#vertical:#horizontal}\executeifdefined{\??layouttextcontent\c!text:#one}\c!middletext}% + 
\setvalue{\namedlayoutelementhash{#vertical:#horizontal}\executeifdefined{\??layouttextcontent\v!text:#one}\c!middletext}% {\page_layouts_process_element_double \c!leftstyle \c!leftcolor \c!leftwidth {#two}% \c!rightstyle\c!rightcolor\c!rightwidth{#three}}% \else\iffourthargument - \setvalue{\namedlayoutelementhash{#vertical:#horizontal}\executeifdefined{\??layouttextcontent\c!text:#one}\c!middletext}% + \setvalue{\namedlayoutelementhash{#vertical:#horizontal}\executeifdefined{\??layouttextcontent\v!text:#one}\c!middletext}% {\page_layouts_process_element_double \c!leftstyle \c!leftcolor \c!leftwidth {#two}% \c!rightstyle\c!rightcolor\c!rightwidth{#two}}% @@ -462,16 +462,16 @@ \def\page_layouts_reset_text_content[#vertical][#horizontal][#tag]% header text middle {\edef\currentlayoutelement{#vertical:#horizontal}% \ifthirdargument - \letvalueempty{\layoutelementhash\executeifdefined{\??layouttextcontent\c!text:#tag}\c!middletext}% + \letvalueempty{\layoutelementhash\executeifdefined{\??layouttextcontent\v!text:#tag}\c!middletext}% \else\ifsecondargument \resetlayoutelementparameter\c!lefttext \resetlayoutelementparameter\c!middletext \resetlayoutelementparameter\c!righttext \fi\fi} -\letvalue{\??layouttextcontent\c!middle:\c!text}\c!middletext -\letvalue{\??layouttextcontent\c!left :\c!text}\c!lefttext -\letvalue{\??layouttextcontent\c!right :\c!text}\c!righttext +\letvalue{\??layouttextcontent\c!middle:\v!text}\c!middletext +\letvalue{\??layouttextcontent\c!left :\v!text}\c!lefttext +\letvalue{\??layouttextcontent\c!right :\v!text}\c!righttext %D The placement of a whole line is handled by the next two %D macros. 
These are hooked into the general purpose token diff --git a/tex/context/base/pdfr-def.mkii b/tex/context/base/pdfr-def.mkii index 7554bda9e..b3f67b93f 100644 --- a/tex/context/base/pdfr-def.mkii +++ b/tex/context/base/pdfr-def.mkii @@ -1,4 +1,4 @@ -% filename : pdfr-def.tex +% filename : pdfr-def.mkii % comment : generated by mtxrun --script chars --pdf % author : Hans Hagen, PRAGMA-ADE, Hasselt NL % copyright: PRAGMA ADE / ConTeXt Development Team diff --git a/tex/context/base/phys-dim.lua b/tex/context/base/phys-dim.lua index e40d1eabb..870cbd29b 100644 --- a/tex/context/base/phys-dim.lua +++ b/tex/context/base/phys-dim.lua @@ -39,6 +39,7 @@ if not modules then modules = { } end modules ['phys-dim'] = { -- RevPerSec = [[RPS]], -- RevPerMin = [[RPM]], +local rawset, next = rawset, next local V, P, S, R, C, Cc, Cs, matchlpeg = lpeg.V, lpeg.P, lpeg.S, lpeg.R, lpeg.C, lpeg.Cc, lpeg.Cs, lpeg.match local format, lower = string.format, string.lower local appendlpeg = lpeg.append @@ -506,20 +507,20 @@ local packaged_units = { -- rendering: -local unitsPUS = context.unitsPUS -local unitsPU = context.unitsPU -local unitsPS = context.unitsPS -local unitsP = context.unitsP -local unitsUS = context.unitsUS -local unitsU = context.unitsU -local unitsS = context.unitsS -local unitsO = context.unitsO -local unitsN = context.unitsN -local unitsC = context.unitsC -local unitsQ = context.unitsQ -local unitsNstart = context.unitsNstart -local unitsNstop = context.unitsNstop -local unitsNspace = context.unitsNspace +local ctx_unitsPUS = context.unitsPUS +local ctx_unitsPU = context.unitsPU +local ctx_unitsPS = context.unitsPS +local ctx_unitsP = context.unitsP +local ctx_unitsUS = context.unitsUS +local ctx_unitsU = context.unitsU +local ctx_unitsS = context.unitsS +local ctx_unitsO = context.unitsO +local ctx_unitsN = context.unitsN +local ctx_unitsC = context.unitsC +local ctx_unitsQ = context.unitsQ +local ctx_unitsNstart = context.unitsNstart +local ctx_unitsNstop = 
context.unitsNstop +local ctx_unitsNspace = context.unitsNspace local labels = languages.data.labels @@ -664,28 +665,28 @@ local function dimpus(p,u,s) if p ~= "" then if u ~= "" then if s ~= "" then - unitsPUS(p,u,s) + ctx_unitsPUS(p,u,s) else - unitsPU(p,u) + ctx_unitsPU(p,u) end elseif s ~= "" then - unitsPS(p,s) + ctx_unitsPS(p,s) else - unitsP(p) + ctx_unitsP(p) end else if u ~= "" then if s ~= "" then - unitsUS(u,s) + ctx_unitsUS(u,s) -- elseif c then - -- unitsC(u) + -- ctx_unitsC(u) else - unitsU(u) + ctx_unitsU(u) end elseif s ~= "" then - unitsS(s) + ctx_unitsS(s) else - unitsP(p) + ctx_unitsP(p) end end end @@ -699,7 +700,7 @@ local function dimop(o) report_units("operator %a",o) end if o then - unitsO(o) + ctx_unitsO(o) end end @@ -709,7 +710,7 @@ local function dimsym(s) end s = symbol_units[s] or s if s then - unitsC(s) + ctx_unitsC(s) end end @@ -719,7 +720,7 @@ local function dimpre(p) end p = packaged_units[p] or p if p then - unitsU(p) + ctx_unitsU(p) end end @@ -789,7 +790,7 @@ local function update_parsers() -- todo: don't remap utf sequences * (V("packaged") / dimpre) * V("somespace"), -- someunknown = V("somespace") - -- * (V("nospace")/unitsU) + -- * (V("nospace")/ctx_unitsU) -- * V("somespace"), -- combination = V("longprefix") * V("longunit") -- centi meter @@ -804,7 +805,7 @@ local function update_parsers() -- todo: don't remap utf sequences + (V("longsuffix") * V("combination")) / dimspu + (V("combination") * (V("shortsuffix") + V("nothing"))) / dimpus ) - * (V("qualifier") / unitsQ)^-1 + * (V("qualifier") / ctx_unitsQ)^-1 * V("somespace"), operator = V("somespace") * ((V("longoperator") + V("shortoperator")) / dimop) @@ -824,13 +825,13 @@ local function update_parsers() -- todo: don't remap utf sequences local number = Cs( P("$") * (1-P("$"))^1 * P("$") + P([[\m{]]) * (1-P("}"))^1 * P("}") + (1-R("az","AZ")-P(" "))^1 -- todo: catch { } -- not ok - ) / unitsN + ) / ctx_unitsN - local start = Cc(nil) / unitsNstart - local stop = Cc(nil) / 
unitsNstop - local space = Cc(nil) / unitsNspace + local start = Cc(nil) / ctx_unitsNstart + local stop = Cc(nil) / ctx_unitsNstop + local space = Cc(nil) / ctx_unitsNspace - -- todo: avoid \unitsNstart\unitsNstop (weird that it can happen .. now catched at tex end) + -- todo: avoid \ctx_unitsNstart\ctx_unitsNstop (weird that it can happen .. now catched at tex end) local p_c_combinedparser = P { "start", number = start * dleader * (p_c_dparser + number) * stop, diff --git a/tex/context/base/publ-dat.lua b/tex/context/base/publ-dat.lua index 8fce94822..b463064ca 100644 --- a/tex/context/base/publ-dat.lua +++ b/tex/context/base/publ-dat.lua @@ -382,9 +382,7 @@ end function loaders.lua(dataset,filename) -- if filename is a table we load that one dataset = datasets[dataset] - if type(dataset) == "table" then - dataset = datasets[dataset] - end + inspect(filename) local data = type(filename) == "table" and filename or table.load(filename) if data then local luadata = dataset.luadata @@ -401,13 +399,13 @@ function loaders.xml(dataset,filename) dataset = datasets[dataset] local luadata = dataset.luadata local root = xml.load(filename) - for entry in xmlcollected(root,"/bibtex/entry") do - local attributes = entry.at + for bibentry in xmlcollected(root,"/bibtex/entry") do + local attributes = bibentry.at local tag = attributes.tag local entry = { category = attributes.category } - for field in xmlcollected(entry,"/field") do + for field in xmlcollected(bibentry,"/field") do -- entry[field.at.name] = xmltext(field) entry[field.at.name] = field.dt[1] -- no cleaning yet end diff --git a/tex/context/base/publ-ini.lua b/tex/context/base/publ-ini.lua index 6bf6714da..e25c57e29 100644 --- a/tex/context/base/publ-ini.lua +++ b/tex/context/base/publ-ini.lua @@ -120,13 +120,17 @@ statistics.register("publications load time", function() end) luatex.registerstopactions(function() - logspushtarget("logfile") - logsnewline() - report("start used btx commands") - logsnewline() + local 
done = false local undefined = csname_id("undefined*crap") for name, dataset in sortedhash(datasets) do for command, n in sortedhash(dataset.commands) do + if not done then + logspushtarget("logfile") + logsnewline() + report("start used btx commands") + logsnewline() + done = true + end local c = csname_id(command) if c and c ~= undefined then report("%-20s %-20s % 5i %s",name,command,n,"known") @@ -140,10 +144,12 @@ luatex.registerstopactions(function() end end end - logsnewline() - report("stop used btxcommands") - logsnewline() - logspoptarget() + if done then + logsnewline() + report("stop used btx commands") + logsnewline() + logspoptarget() + end end) -- multipass, we need to sort because hashing is random per run and not per diff --git a/tex/context/base/publ-ini.mkiv b/tex/context/base/publ-ini.mkiv index 42226695c..adbf8f7fc 100644 --- a/tex/context/base/publ-ini.mkiv +++ b/tex/context/base/publ-ini.mkiv @@ -530,7 +530,7 @@ % \to \everysetupbtxlistplacement \unexpanded\def\btxflushauthor - {\doifnextoptionalelse\btx_flush_author_yes\btx_flush_author_nop} + {\doifnextoptionalcselse\btx_flush_author_yes\btx_flush_author_nop} \def\btx_flush_author_yes[#1]{\btx_flush_author{#1}} \def\btx_flush_author_nop {\btx_flush_author{\btxlistvariantparameter\c!author}} diff --git a/tex/context/base/s-abr-01.tex b/tex/context/base/s-abr-01.tex index e9ea6393b..733eebf7b 100644 --- a/tex/context/base/s-abr-01.tex +++ b/tex/context/base/s-abr-01.tex @@ -240,6 +240,7 @@ \logo [TABLE] {\TaBlE} \logo [TCPIP] {tcp/ip} \logo [TDS] {tds} % no sc te +\logo [TEI] {tei} % no sc te \logo [TETEX] {te\TeX} % no sc te \logo [TEX] {\TeX} \logo [TEXADRES] {\TeX adress} diff --git a/tex/context/base/s-inf-03.mkiv b/tex/context/base/s-inf-03.mkiv index fc654fef5..48449d690 100644 --- a/tex/context/base/s-inf-03.mkiv +++ b/tex/context/base/s-inf-03.mkiv @@ -343,6 +343,10 @@ show("global","",sameglobal.global,false,_G,builtin,"darkgreen",globals,"darkblu for k, v in table.sortedpairs(_G) do 
if not skipglobal[k] and not obsolete[k] and type(v) == "table" and not marked(v) then + + -- local mt = getmetatable(v) + -- print("!!!!!!!!!!",k,v,mt,mt and mt.__index) + if basiclua[k] then show(k,"basic lua",sameglobal[k],basiclua[k],v,builtin[k],"darkred", false,false,true) elseif extralua[k] then show(k,"extra lua",sameglobal[k],extralua[k],v,builtin[k],"darkred", false,false,true) elseif basictex[k] then show(k,"basic tex",sameglobal[k],basictex[k],v,builtin[k],"darkred", false,false,true) @@ -352,7 +356,6 @@ for k, v in table.sortedpairs(_G) do end end - \stopluacode \stoptext diff --git a/tex/context/base/s-math-repertoire.mkiv b/tex/context/base/s-math-repertoire.mkiv index a66d7fc6d..314d23868 100644 --- a/tex/context/base/s-math-repertoire.mkiv +++ b/tex/context/base/s-math-repertoire.mkiv @@ -418,13 +418,13 @@ \continueifinputfile{s-math-repertoire.mkiv} -\showmathcharacterssetbodyfonts{lucidanova,cambria,xits,modern,pagella,termes,bonum} +\showmathcharacterssetbodyfonts{lucidanova,cambria,xits,modern,pagella,termes,bonum,schola} \starttext \doifelse {\getdocumentargument{bodyfont}} {} { - \setupbodyfont[cambria, 12pt] + % \setupbodyfont[cambria, 12pt] % \setupbodyfont[modern, 12pt] % \setupbodyfont[lmvirtual, 12pt] % \setupbodyfont[pxvirtual, 12pt] @@ -437,6 +437,7 @@ % \setupbodyfont[lucidanova,12pt] % \setupbodyfont[pagella, 12pt] % \setupbodyfont[bonum, 12pt] + \setupbodyfont[schola, 12pt] } { diff --git a/tex/context/base/scrn-but.mkvi b/tex/context/base/scrn-but.mkvi index fd2da9e08..f8b236c52 100644 --- a/tex/context/base/scrn-but.mkvi +++ b/tex/context/base/scrn-but.mkvi @@ -217,12 +217,12 @@ {\global\settrue\c_scrn_button_skipped} \def\scrn_button_make_normal#currentparameter#inheritedframed#letparameter#setparameter#text% - {\ctxlua{structures.references.injectcurrentset(nil,nil)}% + {\ctxcommand{injectcurrentreference()}% \hbox attr \referenceattribute \lastreferenceattribute {#inheritedframed{\ignorespaces#text\removeunwantedspaces}}} 
\def\scrn_button_make_contrast#currentparameter#inheritedframed#letparameter#setparameter#text% - {\ctxlua{structures.references.injectcurrentset(nil,nil)}% + {\ctxcommand{injectcurrentreference()}% \hbox attr \referenceattribute \lastreferenceattribute {#setparameter\c!foregroundcolor{#currentparameter\c!contrastcolor}% #inheritedframed{\ignorespaces#text\removeunwantedspaces}}} diff --git a/tex/context/base/scrn-wid.mkvi b/tex/context/base/scrn-wid.mkvi index fad451651..8dcc7a86a 100644 --- a/tex/context/base/scrn-wid.mkvi +++ b/tex/context/base/scrn-wid.mkvi @@ -401,7 +401,7 @@ {\doifassignmentelse{#title} {\setupcurrentcomment[#title]} {\setupcurrentcomment[\c!title=#title,#settings]}% - \ctxlua{buffers.assign("\v!comment",\!!bs#text\!!es)}% todo: expansion control, but expanded by default (xml) + \ctxcommand{assignbuffer("\v!comment",\!!bs#text\!!es)}% todo: expansion control, but expanded by default (xml) \scrn_comment_inject \ignorespaces} diff --git a/tex/context/base/sort-ini.lua b/tex/context/base/sort-ini.lua index d279f1253..9ac020166 100644 --- a/tex/context/base/sort-ini.lua +++ b/tex/context/base/sort-ini.lua @@ -457,7 +457,7 @@ function sorters.strip(str) -- todo: only letters and such str = gsub(str,"\\[\"\'~^`]*","") -- \"e -- hm, too greedy str = gsub(str,"\\%S*","") -- the rest str = gsub(str,"%s","\001") -- can be option - str = gsub(str,"[%s%[%](){}%$\"\']*","") + str = gsub(str,"[%s%[%](){}%$\"\']*","") -- %s already done if digits == v_numbers then str = gsub(str,"(%d+)",numify) -- sort numbers properly end diff --git a/tex/context/base/spac-ali.mkiv b/tex/context/base/spac-ali.mkiv index cf95064a2..c13e4ca76 100644 --- a/tex/context/base/spac-ali.mkiv +++ b/tex/context/base/spac-ali.mkiv @@ -1035,16 +1035,38 @@ % \simplealignedbox{2cm}{right}{x} \installcorenamespace{alignsimple} - -\setvalue{\??alignsimple\v!right }#1{{#1\hss}} -\setvalue{\??alignsimple\v!left }#1{{\hss#1}} -\setvalue{\??alignsimple\v!flushright}#1{{\hss#1}} 
-\setvalue{\??alignsimple\v!flushleft }#1{{#1\hss}} -\setvalue{\??alignsimple\v!middle }#1{{\hss#1\hss}} +\installcorenamespace{alignsimplereverse} + +% todo: also handle \bgroup ... \egroup + +\unexpanded\def\spac_align_simple_left #1{{#1\hss}} +\unexpanded\def\spac_align_simple_right #1{{\hss#1}} +\unexpanded\def\spac_align_simple_middle#1{{\hss#1\hss}} + +\letvalue{\??alignsimple \v!right }\spac_align_simple_left +\letvalue{\??alignsimple \v!outer }\spac_align_simple_left % not managed! see linenumbers +\letvalue{\??alignsimple \v!flushleft }\spac_align_simple_left +\letvalue{\??alignsimple \v!left }\spac_align_simple_right +\letvalue{\??alignsimple \v!inner }\spac_align_simple_right % not managed! see linenumbers +\letvalue{\??alignsimple \v!flushright}\spac_align_simple_right +\letvalue{\??alignsimple \v!middle }\spac_align_simple_middle + +\letvalue{\??alignsimplereverse\v!right }\spac_align_simple_right +\letvalue{\??alignsimplereverse\v!outer }\spac_align_simple_right % not managed! see linenumbers +\letvalue{\??alignsimplereverse\v!flushleft }\spac_align_simple_right +\letvalue{\??alignsimplereverse\v!left }\spac_align_simple_left +\letvalue{\??alignsimplereverse\v!inner }\spac_align_simple_left % not managed! 
see linenumbers +\letvalue{\??alignsimplereverse\v!flushright}\spac_align_simple_left +\letvalue{\??alignsimplereverse\v!middle }\spac_align_simple_middle \unexpanded\def\simplealignedbox#1#2% {\hbox to #1\csname\??alignsimple\ifcsname\??alignsimple#2\endcsname#2\else\v!right\fi\endcsname} +\newconditional\alignsimplelefttoright \settrue\alignsimplelefttoright + +\unexpanded\def\simplereversealignedbox#1#2% + {\hbox to #1\csname\??alignsimplereverse\ifcsname\??alignsimplereverse#2\endcsname#2\else\v!left\fi\endcsname} + % \installnamespace{alignsets} % % \setvalue{\??alignsets\v!right }#1#2{\let#1\relax\let#2\hss } diff --git a/tex/context/base/spac-chr.lua b/tex/context/base/spac-chr.lua index 4122a64b6..1abba350a 100644 --- a/tex/context/base/spac-chr.lua +++ b/tex/context/base/spac-chr.lua @@ -14,6 +14,8 @@ local byte, lower = string.byte, string.lower -- to be redone: characters will become tagged spaces instead as then we keep track of -- spaceskip etc +-- todo: only setattr when export + local next = next trace_characters = false trackers.register("typesetters.characters", function(v) trace_characters = v end) diff --git a/tex/context/base/spac-hor.mkiv b/tex/context/base/spac-hor.mkiv index 4cd913290..92491ce32 100644 --- a/tex/context/base/spac-hor.mkiv +++ b/tex/context/base/spac-hor.mkiv @@ -32,7 +32,7 @@ {\doifoutervmode{\ifconditional\c_spac_indentation_indent_first\else\spac_indentation_variant_no\fi}} \unexpanded\def\setupindenting - {\doifnextoptionalelse\spac_indentation_setup_options\spac_indentation_setup_size} + {\doifnextoptionalcselse\spac_indentation_setup_options\spac_indentation_setup_size} \unexpanded\def\spac_indentation_setup_size {\assigndimension\v_spac_indentation_current\d_spac_indentation_par{1\emwidth}{1.5\emwidth}{2\emwidth}} @@ -64,24 +64,65 @@ \def\spac_indentation_set_everypar {\everypar{\checkindentation}} +% \def\spac_indentation_apply_step_one#1% +% {\ifcsname\??indentingmethod#1\endcsname +% % case two +% \else +% 
\edef\v_spac_indentation_current{#1}% single entry in list +% \let\normalindentation\v_spac_indentation_current +% \spac_indentation_setup_size +% \fi} +% +% \def\spac_indentation_apply_step_two#1% +% {\ifcsname\??indentingmethod#1\endcsname +% \csname\??indentingmethod#1\endcsname +% \else +% % case one +% \fi} + +% \defineindenting[whatever][yes,2cm] +% %defineindenting[whatever][yes,-2cm] +% +% \setupindenting[yes,-2em] \input ward \par +% \setupindenting[yes,2em] \input ward \par +% \setupindenting[whatever] \input ward \par + +\installcorenamespace {indentingpreset} + +\unexpanded\def\defineindenting + {\dodoubleargument\spac_indenting_define} + +\def\spac_indenting_define[#1][#2]% todo: mixes + {\setevalue{\??indentingpreset#1}{#2}} + +\def\spac_indentation_apply_step_one_nested#1% + {\expandafter\processcommacommand\expandafter[\csname\??indentingpreset#1\endcsname]\spac_indentation_apply_step_one} + +\def\spac_indentation_apply_step_two_nested#1% + {\expandafter\processcommacommand\expandafter[\csname\??indentingpreset#1\endcsname]\spac_indentation_apply_step_two} + \def\spac_indentation_apply_step_one#1% - {\ifcsname\??indentingmethod#1\endcsname + {\ifcsname\??indentingpreset#1\endcsname + \spac_indentation_apply_step_one_nested{#1}% + \else\ifcsname\??indentingmethod#1\endcsname % case two \else \edef\v_spac_indentation_current{#1}% single entry in list \let\normalindentation\v_spac_indentation_current \spac_indentation_setup_size - \fi} + \fi\fi} \def\spac_indentation_apply_step_two#1% - {\ifcsname\??indentingmethod#1\endcsname + {\ifcsname\??indentingpreset#1\endcsname + \spac_indentation_apply_step_two_nested{#1}% + \else\ifcsname\??indentingmethod#1\endcsname \csname\??indentingmethod#1\endcsname \else % case one - \fi} + \fi\fi} \unexpanded\def\indenting % kind of obsolete - {\doifnextoptionalelse\spac_indentation_setup_options\relax} + {\doifnextoptionalcselse\spac_indentation_setup_options\relax} % use \noindentation to suppress next indentation 
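
The `\defineindenting` mechanism introduced above adds a preset namespace: an item in the indenting list is first tried as a stored comma list (which may itself contain presets) before the regular method lookup. A rough Lua model of that two-stage, recursive lookup; all names here are invented for illustration and this is not ConTeXt code:

```lua
-- Sketch of \spac_indentation_apply_step_one with the new preset namespace:
-- an item resolves first as a preset (expanding to another list), then as a
-- known method, and otherwise counts as a size specification.
local presets = { }                        -- cf. \??indentingpreset
local methods = { yes = true, no = true }  -- cf. \??indentingmethod (subset)

local function defineindenting(name, list) -- cf. \defineindenting[...][...]
    presets[name] = list
end

local function apply(item, action)
    if presets[item] then
        for sub in string.gmatch(presets[item], "[^,]+") do
            apply(sub, action) -- nested presets expand recursively
        end
    elseif methods[item] then
        action("method", item)
    else
        action("size", item)   -- e.g. a dimension like "2cm"
    end
end

defineindenting("whatever", "yes,2cm")
local seen = { }
apply("whatever", function(kind, v) seen[#seen+1] = kind .. "=" .. v end)
-- seen is now { "method=yes", "size=2cm" }
```

The real macros split the work differently (step one collects the size, step two executes the method), but the preset indirection works the same way in both passes.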
@@ -339,7 +380,7 @@ \installspacingmethod \v!broad {\nonfrenchspacing} % more depending on what punctuation \unexpanded\def\setupspacing - {\doifnextoptionalelse\spac_spacecodes_setup_yes\spac_spacecodes_setup_nop} + {\doifnextoptionalcselse\spac_spacecodes_setup_yes\spac_spacecodes_setup_nop} \def\spac_spacecodes_setup_yes[#1]% {\csname\??spacecodemethod#1\endcsname @@ -1059,7 +1100,7 @@ %D A rather unknown one: \unexpanded\def\widened % moved from cont-new - {\doifnextoptionalelse\spac_widened_yes\spac_widened_nop} + {\doifnextoptionalcselse\spac_widened_yes\spac_widened_nop} \def\spac_widened_yes[#1]#2{\hbox \s!spread #1{\hss#2\hss}} \def\spac_widened_nop #1{\hbox \s!spread \emwidth{\hss#1\hss}} diff --git a/tex/context/base/spac-ver.lua b/tex/context/base/spac-ver.lua index 7d78d6c12..3f1fd5c82 100644 --- a/tex/context/base/spac-ver.lua +++ b/tex/context/base/spac-ver.lua @@ -8,7 +8,8 @@ if not modules then modules = { } end modules ['spac-ver'] = { -- we also need to call the spacer for inserts! --- todo: directly set skips +-- todo: use lua nodes with lua data (>0.79) +-- see ** can go when 0.79 -- this code dates from the beginning and is kind of experimental; it -- will be optimized and improved soon @@ -120,8 +121,8 @@ builders.vspacing = vspacing local vspacingdata = vspacing.data or { } vspacing.data = vspacingdata -vspacingdata.snapmethods = vspacingdata.snapmethods or { } -local snapmethods = vspacingdata.snapmethods --maybe some older code can go +local snapmethods = vspacingdata.snapmethods or { } +vspacingdata.snapmethods = snapmethods storage.register("builders/vspacing/data/snapmethods", snapmethods, "builders.vspacing.data.snapmethods") @@ -535,14 +536,15 @@ local categories = allocate { [5] = 'disable', [6] = 'nowhite', [7] = 'goback', - [8] = 'together' + [8] = 'together', -- not used (?) 
+ [9] = 'overlay', } vspacing.categories = categories function vspacing.tocategories(str) local t = { } - for s in gmatch(str,"[^, ]") do + for s in gmatch(str,"[^, ]") do -- use lpeg instead local n = tonumber(s) if n then t[categories[n]] = true @@ -553,7 +555,7 @@ function vspacing.tocategories(str) return t end -function vspacing.tocategory(str) +function vspacing.tocategory(str) -- can be optimized if type(str) == "string" then return set.tonumber(vspacing.tocategories(str)) else @@ -584,15 +586,15 @@ do -- todo: interface.variables -- This will change: just node.write and we can store the values in skips which -- then obeys grouping - local fixedblankskip = context.fixedblankskip - local flexibleblankskip = context.flexibleblankskip - local setblankcategory = context.setblankcategory - local setblankorder = context.setblankorder - local setblankpenalty = context.setblankpenalty - local setblankhandling = context.setblankhandling - local flushblankhandling = context.flushblankhandling - local addpredefinedblankskip = context.addpredefinedblankskip - local addaskedblankskip = context.addaskedblankskip + local ctx_fixedblankskip = context.fixedblankskip + local ctx_flexibleblankskip = context.flexibleblankskip + local ctx_setblankcategory = context.setblankcategory + local ctx_setblankorder = context.setblankorder + local ctx_setblankpenalty = context.setblankpenalty + ----- ctx_setblankhandling = context.setblankhandling + local ctx_flushblankhandling = context.flushblankhandling + local ctx_addpredefinedblankskip = context.addpredefinedblankskip + local ctx_addaskedblankskip = context.addaskedblankskip local function analyze(str,oldcategory) -- we could use shorter names for s in gmatch(str,"([^ ,]+)") do @@ -604,35 +606,35 @@ do -- todo: interface.variables if mk then category = analyze(mk,category) elseif keyword == k_fixed then - fixedblankskip() + ctx_fixedblankskip() elseif keyword == k_flexible then - flexibleblankskip() + ctx_flexibleblankskip() elseif 
keyword == k_category then local category = tonumber(detail) if category then - setblankcategory(category) + ctx_setblankcategory(category) if category ~= oldcategory then - flushblankhandling() + ctx_flushblankhandling() oldcategory = category end end elseif keyword == k_order and detail then local order = tonumber(detail) if order then - setblankorder(order) + ctx_setblankorder(order) end elseif keyword == k_penalty and detail then local penalty = tonumber(detail) if penalty then - setblankpenalty(penalty) + ctx_setblankpenalty(penalty) end else amount = tonumber(amount) or 1 local sk = skip[keyword] if sk then - addpredefinedblankskip(amount,keyword) + ctx_addpredefinedblankskip(amount,keyword) else -- no check - addaskedblankskip(amount,keyword) + ctx_addaskedblankskip(amount,keyword) end end end @@ -640,22 +642,22 @@ do -- todo: interface.variables return category end - local pushlogger = context.pushlogger - local startblankhandling = context.startblankhandling - local stopblankhandling = context.stopblankhandling - local poplogger = context.poplogger + local ctx_pushlogger = context.pushlogger + local ctx_startblankhandling = context.startblankhandling + local ctx_stopblankhandling = context.stopblankhandling + local ctx_poplogger = context.poplogger function vspacing.analyze(str) if trace_vspacing then - pushlogger(report_vspacing) - startblankhandling() + ctx_pushlogger(report_vspacing) + ctx_startblankhandling() analyze(str,1) - stopblankhandling() - poplogger() + ctx_stopblankhandling() + ctx_poplogger() else - startblankhandling() + ctx_startblankhandling() analyze(str,1) - stopblankhandling() + ctx_stopblankhandling() end end @@ -774,7 +776,8 @@ local splittopskip_code = skipcodes.splittopskip -- end local free_glue_node = free_node -local free_glue_spec = function() end -- free_node +local free_glue_spec = function() end +----- free_glue_spec = free_node -- can be enabled in 0.73 (so for the moment we leak due to old luatex engine issues) function
vspacing.snapbox(n,how) local sv = snapmethods[how] @@ -853,7 +856,16 @@ end -- penalty only works well when before skip -local discard, largest, force, penalty, add, disable, nowhite, goback, together = 0, 1, 2, 3, 4, 5, 6, 7, 8 -- move into function when upvalue 60 issue +local discard = 0 +local largest = 1 +local force = 2 +local penalty = 3 +local add = 4 +local disable = 5 +local nowhite = 6 +local goback = 7 +local together = 8 -- not used (?) +local overlay = 9 -- [whatsits][hlist][glue][glue][penalty] @@ -885,6 +897,108 @@ local function specialpenalty(start,penalty) end end +local function check_experimental_overlay(head,current) -- todo + local p = nil + local c = current + local n = nil + +setfield(head,"prev",nil) -- till we have 0.79 ** + + local function overlay(p, n, s, mvl) + local c = getprev(n) + while c and c ~= p do + local p = getprev(c) + free_node(c) + c = p + end + setfield(n,"prev",nil) + if not mvl then + setfield(p,"next",n) + end + local p_ht = getfield(p,"height") + local p_dp = getfield(p,"depth") + local n_ht = getfield(n,"height") + local delta = n_ht + s + p_dp + local k = new_kern(-delta) + if trace_vspacing then + report_vspacing("overlaying, prev height: %p, prev depth: %p, next height: %p, skips: %p, move up: %p",p_ht,p_dp,n_ht,s,delta) + end + if n_ht > p_ht then + -- we should adapt pagetotal ! 
(need a hook for that) + setfield(p,"height",n_ht) + end + return k + end + + while c do + local id = getid(c) + if id == glue_code or id == penalty_code or id == kern_code then + -- skip (actually, remove) + c = getnext(c) + elseif id == hlist_code then + n = c + break + else + break + end + end + if n then + -- we have a next line + c = current + while c do + local id = getid(c) + if id == glue_code or id == penalty_code then + c = getprev(c) + elseif id == hlist_code then + p = c + break + else + break + end + end + if not p then + if a_snapmethod == a_snapvbox then + -- quit, we're not on the mvl + else + -- messy + local c = tonut(texlists.page_head) + local s = 0 + while c do + local id = getid(c) + if id == glue_code then + if p then + s = s + getfield(getfield(c,"glue_spec"),"width") + end + elseif id == kern_code then + if p then + s = s + getfield(c,"kern") + end + elseif id == penalty_code then + -- skip (actually, remove) + elseif id == hlist_code then + p = c + s = 0 + else + p = nil + s = 0 + end + c = getnext(c) + end + if p and p ~= n then + local k = overlay(p,n,s,true) + insert_node_before(n,n,k) + return k, getnext(n) + end + end + elseif p ~= n then + local k = overlay(p,n,0,false ) + insert_node_after(p,p,k) + return head, getnext(n) + end + end + return remove_node(head, current, true) +end + local function collapser(head,where,what,trace,snap,a_snapmethod) -- maybe also pass tail if trace then reset_tracing(head) @@ -900,7 +1014,17 @@ local function collapser(head,where,what,trace,snap,a_snapmethod) -- maybe also if penalty_data then local p = new_penalty(penalty_data) if trace then trace_done("flushed due to " .. why,p) end +if penalty_data >= 10000 then -- or whatever threshold? 
+ local prev = getprev(current) + if getid(prev) == glue_code then -- maybe go back more, or maybe even push back before any glue + -- tricky case: spacing/grid-007.tex: glue penalty glue + head = insert_node_before(head,prev,p) + else + head = insert_node_before(head,current,p) + end +else head = insert_node_before(head,current,p) +end end if glue_data then local spec = getfield(glue_data,"spec") @@ -1059,6 +1183,10 @@ local function collapser(head,where,what,trace,snap,a_snapmethod) -- maybe also elseif sc == discard then if trace then trace_skip("discard",sc,so,sp,current) end head, current = remove_node(head, current, true) + elseif sc == overlay then + -- todo (overlay following line over previous) + if trace then trace_skip("overlay",sc,so,sp,current) end + head, current = check_experimental_overlay(head,current,a_snapmethod) elseif ignore_following then if trace then trace_skip("disabled",sc,so,sp,current) end head, current = remove_node(head, current, true) @@ -1403,3 +1531,4 @@ commands.vspacingdefine = vspacing.setmap commands.vspacingcollapse = vspacing.collapsevbox commands.vspacingsnap = vspacing.snapbox commands.resetprevdepth = vspacing.resetprevdepth +commands.definesnapmethod = vspacing.definesnapmethod diff --git a/tex/context/base/spac-ver.mkiv b/tex/context/base/spac-ver.mkiv index afa722cfe..0c84958be 100644 --- a/tex/context/base/spac-ver.mkiv +++ b/tex/context/base/spac-ver.mkiv @@ -152,10 +152,14 @@ \unexpanded\def\setupinterlinespace {\dodoubleempty\spac_linespacing_setup} +\ifdefined\setupinterlinespace_double \else + \let\setupinterlinespace_double\setup_interlinespace % for a while +\fi + \def\spac_linespacing_setup[#1][#2]% {\settrue\interlinespaceisset % reset has to be done when needed \ifsecondargument - \setup_interlinespace[#1][#2]% + \setupinterlinespace_double[#1][#2]% \else\iffirstargument \ifcsname\namedinterlinespacehash{#1}\s!parent\endcsname \edef\currentinterlinespace{#1}% @@ -330,7 +334,7 @@
\let\v_spac_whitespace_current\v!none \unexpanded\def\setupwhitespace - {\doifnextoptionalelse\spac_whitespace_setup_yes\spac_whitespace_setup_nop} + {\doifnextoptionalcselse\spac_whitespace_setup_yes\spac_whitespace_setup_nop} \def\spac_whitespace_setup_nop {\ifx\v_spac_whitespace_current\v!none\else @@ -542,6 +546,8 @@ \ignorespaces \let\spac_lines_stop_correction\spac_lines_stop_correction_yes} +% still not ok ... will move to the lua end ... needs a final solution + \unexpanded\def\spac_lines_stop_correction_yes {\removeunwantedspaces \egroup @@ -549,6 +555,11 @@ \blank[\v!white]% \snaptogrid\hbox{\box\scratchbox}% \else +\blank[\v!nowhite]% +\ifdim\parskip>\zeropoint + % too fuzzy otherwise +\else + % doesn't like whitespace \ifdim\d_spac_prevdepth<\maxdimen \unless\ifdim\d_spac_prevdepth<\zeropoint \ifdim\d_spac_prevdepth<\strutdp \relax @@ -562,6 +573,7 @@ \fi \fi \fi +\fi \ifdim\pagegoal<\maxdimen \blank[\v!white,\the\d_spac_lines_correction_before]% \blank[\v!white]\dotopbaselinecorrection \fi @@ -1154,6 +1166,10 @@ \let\normaloffinterlineskip\offinterlineskip % knuth's original +\appendtoks + \ifvmode\ctxcommand{resetprevdepth()}\fi % a nasty hack (tested for a while now) +\to \everyafteroutput + %D My own one: \unexpanded\def\spac_helpers_push_interlineskip_yes @@ -1325,7 +1341,7 @@ \unexpanded\def\installsnapvalues#1#2% todo: a proper define {\edef\currentsnapper{#1:#2}% \ifcsname\??gridsnapperattributes\currentsnapper\endcsname \else - \setevalue{\??gridsnapperattributes\currentsnapper}{\ctxlua{builders.vspacing.definesnapmethod("#1","#2")}}% + \setevalue{\??gridsnapperattributes\currentsnapper}{\ctxcommand{definesnapmethod("#1","#2")}}% \fi \setevalue{\??gridsnappers#1}{\attribute\snapmethodattribute\csname\??gridsnapperattributes\currentsnapper\endcsname\space}} @@ -1751,7 +1767,7 @@ % The main spacer: \unexpanded\def\vspacing - {\doifnextoptionalelse\spac_vspacing_yes\spac_vspacing_nop} + {\doifnextoptionalcselse\spac_vspacing_yes\spac_vspacing_nop} 
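
The switch just above from `\ctxlua{builders.vspacing.definesnapmethod(...)}` to `\ctxcommand{definesnapmethod(...)}` relies on the `commands` table that the last hunk of spac-ver.lua extends: a function registered there is reachable by a short flat name, so TeX ships less token material to Lua. A minimal sketch of the registration side (the dispatcher itself lives in the ConTeXt kernel; the argument values below are made up):

```lua
-- Registration pattern behind \ctxcommand: module functions are exported
-- once into the flat commands table under a short name.
commands = commands or { }

local vspacing = { }

function vspacing.definesnapmethod(name, method)
    -- the real function returns an attribute value for the snapper;
    -- this stand-in only illustrates the call path
    return name .. ":" .. method
end

commands.definesnapmethod = vspacing.definesnapmethod

-- \ctxcommand{definesnapmethod("line","tolerant")} then boils down to a
-- single table lookup plus a call:
local attr = commands.definesnapmethod("line", "tolerant")
```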
\def\spac_vspacing_yes {\ifinpagebody % somewhat weird @@ -1807,7 +1823,7 @@ % these depend on bigskipamount cum suis so we'd better sync them \unexpanded\def\setupvspacing - {\doifnextoptionalelse\setupvspacing_yes\setupvspacing_nop} + {\doifnextoptionalcselse\setupvspacing_yes\setupvspacing_nop} \let\currentvspacing\s!default % hm, default, standard ... @@ -1860,6 +1876,14 @@ \fi\fi \relax} +% \strut \hfill first line \blank[overlay] second line \hfill \strut +% +% \ruledvbox { +% \strut \hfill line 1 \blank[overlay] +% line 2 \hfill \strut \blank[overlay] +% \strut \hfill line 3 \hfill \strut +% } + \definevspacing[\v!preference][penalty:-500] % goodbreak \definevspacing[\v!samepage] [penalty:10000] % nobreak \definevspacing[\v!max] [category:1] @@ -1867,6 +1891,8 @@ \definevspacing[\v!disable] [category:5] \definevspacing[\v!nowhite] [category:6] \definevspacing[\v!back] [category:7] +% together [category:8] +\definevspacing[\v!overlay] [category:9] \definevspacing[\v!always] [category:0] \definevspacing[\v!weak] [order:0] \definevspacing[\v!strong] [order:100] diff --git a/tex/context/base/status-files.pdf b/tex/context/base/status-files.pdf Binary files differindex ae09bb5ae..f7a228bfc 100644 --- a/tex/context/base/status-files.pdf +++ b/tex/context/base/status-files.pdf diff --git a/tex/context/base/status-lua.pdf b/tex/context/base/status-lua.pdf Binary files differindex 8f1f3e5c8..547c0e785 100644 --- a/tex/context/base/status-lua.pdf +++ b/tex/context/base/status-lua.pdf diff --git a/tex/context/base/status-mkiv.lua b/tex/context/base/status-mkiv.lua index 339bc24f6..07e912a88 100644 --- a/tex/context/base/status-mkiv.lua +++ b/tex/context/base/status-mkiv.lua @@ -3207,6 +3207,12 @@ return { }, { category = "lua", + filename = "font-inj", + loading = "font-lib", + status = "okay", + }, + { + category = "lua", filename = "font-ldr", loading = "on demand", status = "okay", @@ -4089,6 +4095,11 @@ return { }, { category = "lua", + filename = "node-ppt", + 
status = "todo", + }, + { + category = "lua", filename = "node-pro", status = "todo", }, diff --git a/tex/context/base/strc-blk.lua b/tex/context/base/strc-blk.lua index 935b6c061..ce3304d59 100644 --- a/tex/context/base/strc-blk.lua +++ b/tex/context/base/strc-blk.lua @@ -78,7 +78,7 @@ end function blocks.select(state,name,tag,criterium) criterium = criterium or "text" - if find(tag,"=") then tag = "" end + if find(tag,"=",1,true) then tag = "" end local names = settings_to_set(name) local all = tag == "" local tags = not all and settings_to_set(tag) diff --git a/tex/context/base/strc-con.mkvi b/tex/context/base/strc-con.mkvi index 75519b8ce..1862b00a6 100644 --- a/tex/context/base/strc-con.mkvi +++ b/tex/context/base/strc-con.mkvi @@ -980,10 +980,11 @@ } }\relax % \writestatus{constructions}{registering \currentconstruction: \number\scratchcounter}% + \ctxcommand{setinternalreference("\referenceprefix","\currentconstructionreference",\nextinternalreference,"\interactionparameter\c!focus")}% \normalexpanded{% \endgroup \edef\noexpand\currentconstructionlistentry {\the\scratchcounter}% - \edef\noexpand\currentconstructionattribute {\ctxcommand {setinternalreference("\referenceprefix","\currentconstructionreference",\nextinternalreference,"\interactionparameter\c!focus")}}% + \edef\noexpand\currentconstructionattribute {\the\lastdestinationattribute}% \edef\noexpand\currentconstructionsynchronize{\ctxlatecommand{enhancelist(\the\scratchcounter)}}% }% \fi} @@ -993,7 +994,7 @@ % macros. 
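
The strc-blk.lua hunk above changes `find(tag,"=")` to `find(tag,"=",1,true)`, and the same idiom recurs in the strc-doc.lua hunks. The fourth argument of `string.find` turns off Lua pattern matching, which is slightly faster and, once the needle is a magic character such as `.`, also necessary:

```lua
-- string.find(s, needle, init, plain): with plain = true the needle is
-- matched as a literal substring instead of a Lua pattern.
assert(string.find("a=b", "=", 1, true) == 2)   -- literal "=" at position 2
assert(string.find("abc", ".", 1, true) == nil) -- no literal dot present
assert(string.find("abc", ".") == 1)            -- "." as a pattern: any char
```

This is why the old code needed the escaped form `find(key,"%.")`: unescaped, the dot would have matched every string.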
\def\reinstateconstructionnumberentry#1% was xdef - {\edef\currentconstructionattribute {\ctxcommand {getinternalreference(#1)}}% + {\edef\currentconstructionattribute {\ctxcommand {getinternallistreference(#1)}}% \edef\currentconstructionsynchronize{\ctxlatecommand{enhancelist(#1)}}} \installstructurelistprocessor{construction}{\usestructurelistprocessor{number+title}} diff --git a/tex/context/base/strc-des.mkvi b/tex/context/base/strc-des.mkvi index 9c4d3fc6d..fa20d3cae 100644 --- a/tex/context/base/strc-des.mkvi +++ b/tex/context/base/strc-des.mkvi @@ -102,7 +102,7 @@ \unexpanded\def\strc_descriptions_start#1% {\begingroup \strc_constructions_initialize{#1}% - \doifnextoptionalelse\strc_descriptions_start_yes\strc_descriptions_start_nop} + \doifnextoptionalcselse\strc_descriptions_start_yes\strc_descriptions_start_nop} \unexpanded\def\strc_descriptions_start_yes[#1]% {\doifassignmentelse{#1}\strc_descriptions_start_yes_assignment\strc_descriptions_start_yes_reference[#1]} @@ -162,7 +162,7 @@ \unexpanded\def\strc_descriptions_command#1% {\begingroup \strc_constructions_initialize{#1}% - \doifnextoptionalelse\strc_descriptions_yes\strc_descriptions_nop} + \doifnextoptionalcselse\strc_descriptions_yes\strc_descriptions_nop} \unexpanded\def\strc_descriptions_yes {\ifconditional\c_strc_constructions_title_state diff --git a/tex/context/base/strc-doc.lua b/tex/context/base/strc-doc.lua index e3cbb02ed..38830a4e7 100644 --- a/tex/context/base/strc-doc.lua +++ b/tex/context/base/strc-doc.lua @@ -61,6 +61,10 @@ local strippedprocessor = processors.stripped local a_internal = attributes.private('internal') +local ctx_convertnumber = context.convertnumber +local ctx_sprint = context.sprint +local ctx_finalizeauto = context.finalizeautostructurelevel + -- -- -- document -- -- -- local data -- the current state @@ -239,7 +243,7 @@ end local saveset = { } -- experiment, see sections/tricky-001.tex -function sections.somelevel(given) +function sections.setentry(given) -- old 
number local numbers = data.numbers @@ -456,7 +460,7 @@ function sections.structuredata(depth,key,default,honorcatcodetable) -- todo: sp local data = data.status[depth] local d if data then - if find(key,"%.") then + if find(key,".",1,true) then d = accesstable(key,data) else d = data.titledata @@ -468,7 +472,7 @@ function sections.structuredata(depth,key,default,honorcatcodetable) -- todo: sp local metadata = data.metadata local catcodes = metadata and metadata.catcodes if catcodes then - context.sprint(catcodes,d) + ctx_sprint(catcodes,d) else context(d) end @@ -477,7 +481,7 @@ function sections.structuredata(depth,key,default,honorcatcodetable) -- todo: sp else local catcodes = catcodenumbers[honorcatcodetable] if catcodes then - context.sprint(catcodes,d) + ctx_sprint(catcodes,d) else context(d) end @@ -512,14 +516,20 @@ function sections.current() return data.status[data.depth] end -function sections.depthnumber(n) +local function depthnumber(n) local depth = data.depth if not n or n == 0 then n = depth elseif n < 0 then n = depth + n end - return context(data.numbers[n] or 0) + return data.numbers[n] or 0 +end + +sections.depthnumber = depthnumber + +function commands.depthnumber(n) + return context(depthnumber(n)) end function sections.autodepth(numbers) @@ -580,11 +590,11 @@ local function process(index,numbers,ownnumbers,criterium,separatorset,conversio if ownnumber ~= "" then applyprocessor(ownnumber) elseif conversion and conversion ~= "" then -- traditional (e.g. 
used in itemgroups) - context.convertnumber(conversion,number) + ctx_convertnumber(conversion,number) else local theconversion = sets.get("structure:conversions",block,conversionset,index,"numbers") local data = startapplyprocessor(theconversion) - context.convertnumber(data or "numbers",number) + ctx_convertnumber(data or "numbers",number) stopapplyprocessor() end end @@ -926,7 +936,7 @@ function commands.autonextstructurelevel(level) else for i=level,#levels do if levels[i] then - context.finalizeautostructurelevel() + ctx_finalizeauto() levels[i] = false end end @@ -937,7 +947,7 @@ end function commands.autofinishstructurelevels() for i=1,#levels do if levels[i] then - context.finalizeautostructurelevel() + ctx_finalizeauto() end end levels = { } @@ -945,8 +955,8 @@ end -- interface (some are actually already commands, like sections.fullnumber) -commands.structurenumber = function() sections.fullnumber() end -commands.structuretitle = function() sections.title () end +commands.structurenumber = sections.fullnumber +commands.structuretitle = sections.title commands.structurevariable = function(name) sections.structuredata(nil,name) end commands.structureuservariable = function(name) sections.userdata (nil,name) end @@ -954,15 +964,23 @@ commands.structurecatcodedget = function(name) sections.structured commands.structuregivencatcodedget = function(name,catcode) sections.structuredata(nil,name,nil,catcode) end commands.structureautocatcodedget = function(name,catcode) sections.structuredata(nil,name,nil,catcode) end -commands.namedstructurevariable = function(depth,name) sections.structuredata(depth,name) end -commands.namedstructureuservariable = function(depth,name) sections.userdata (depth,name) end +commands.namedstructurevariable = sections.structuredata +commands.namedstructureuservariable = sections.userdata --- +commands.setsectionlevel = sections.setlevel +commands.setsectionnumber = sections.setnumber +commands.getsectionnumber = sections.getnumber 
+commands.getfullsectionnumber = sections.fullnumber +commands.getstructuredata = sections.structuredata +commands.getcurrentsectionlevel = sections.getcurrentlevel -commands.setsectionblock = sections.setblock -commands.pushsectionblock = sections.pushblock -commands.popsectionblock = sections.popblock +commands.setsectionblock = sections.setblock +commands.pushsectionblock = sections.pushblock +commands.popsectionblock = sections.popblock +commands.registersection = sections.register +commands.setsectionentry = sections.setentry +commands.reportstructure = sections.reportstructure -- local byway = "^" .. v_by -- ugly but downward compatible diff --git a/tex/context/base/strc-doc.mkiv b/tex/context/base/strc-doc.mkiv index c8dfae1e4..98abfd611 100644 --- a/tex/context/base/strc-doc.mkiv +++ b/tex/context/base/strc-doc.mkiv @@ -20,7 +20,8 @@ %D This will move: \unexpanded\def\setstructuresynchronization#1% todo: use ctxcontext - {\xdef\currentstructureattribute {\ctxlua {tex.write(structures.references.setinternalreference("\currentstructurereferenceprefix","\currentstructurereference",\nextinternalreference,"\interactionparameter\c!focus"))}}% + {\ctxcommand{setinternalreference("\currentstructurereferenceprefix","\currentstructurereference",\nextinternalreference,"\interactionparameter\c!focus")}% + \xdef\currentstructureattribute {\the\lastdestinationattribute}% \xdef\currentstructuresynchronize{\ctxlatecommand{enhancelist(#1)}}} \protect \endinput diff --git a/tex/context/base/strc-enu.mkvi b/tex/context/base/strc-enu.mkvi index e369bc2e1..0a01d2637 100644 --- a/tex/context/base/strc-enu.mkvi +++ b/tex/context/base/strc-enu.mkvi @@ -370,6 +370,6 @@ \fi} \unexpanded\def\strc_enumerations_skip_number_coupling[#tag]% e.g. 
for questions with no answer - {\ctxlua{structures.references.setnextorder("construction","#tag")}} + {\ctxcommand{setnextreferenceorder("construction","#tag")}} \protect \endinput diff --git a/tex/context/base/strc-ini.lua b/tex/context/base/strc-ini.lua index 09ed79288..a48679e6f 100644 --- a/tex/context/base/strc-ini.lua +++ b/tex/context/base/strc-ini.lua @@ -38,14 +38,19 @@ local txtcatcodes = catcodenumbers.txtcatcodes local context = context local commands = commands -local pushcatcodes = context.pushcatcodes -local popcatcodes = context.popcatcodes - local trace_processors = false local report_processors = logs.reporter("processors","structure") trackers.register("typesetters.processors", function(v) trace_processors = v end) +local xmlconvert = lxml.convert +local xmlstore = lxml.store + +local ctx_pushcatcodes = context.pushcatcodes +local ctx_popcatcodes = context.popcatcodes +local ctx_xmlsetup = context.xmlsetup +local ctx_xmlprocessbuffer = context.xmlprocessbuffer + -- -- -- namespace -- -- -- -- This is tricky: we have stored and initialized already some of @@ -151,11 +156,17 @@ local function simplify(d,nodefault) for k, v in next, d do local tv = type(v) if tv == "table" then - if next(v) then t[k] = simplify(v) end + if next(v) then + t[k] = simplify(v) + end elseif tv == "string" then - if v ~= "" and v ~= "default" then t[k] = v end + if v ~= "" and v ~= "default" then + t[k] = v + end elseif tv == "boolean" then - if v then t[k] = v end + if v then + t[k] = v + end else t[k] = v end @@ -168,6 +179,34 @@ local function simplify(d,nodefault) end end +-- we only care about the tuc file so this would do too: +-- +-- local function simplify(d,nodefault) +-- if d then +-- for k, v in next, d do +-- local tv = type(v) +-- if tv == "string" then +-- if v == "" or v == "default" then +-- d[k] = nil +-- end +-- elseif tv == "table" then +-- if next(v) then +-- simplify(v) +-- end +-- elseif tv == "boolean" then +-- if not v then +-- d[k] = nil +-- end 
+-- end +-- end +-- return d +-- elseif nodefault then +-- return nil +-- else +-- return { } +-- end +-- end + helpers.simplify = simplify function helpers.merged(...) @@ -211,19 +250,19 @@ function helpers.title(title,metadata) -- coding is xml is rather old and not th report_processors("putting xml data in buffer: %s",xmldata) report_processors("processing buffer with setup %a and tag %a",xmlsetup,tag) end - if experiment then - -- the question is: will this be forgotten ... better store in a via file - local xmltable = lxml.convert("temp",xmldata or "") - lxml.store("temp",xmltable) - context.xmlsetup("temp",xmlsetup or "") - else - context.xmlprocessbuffer("dummy",tag,xmlsetup or "") - end + if experiment then + -- the question is: will this be forgotten ... better store in a via file + local xmltable = xmlconvert("temp",xmldata or "") + xmlstore("temp",xmltable) + ctx_xmlsetup("temp",xmlsetup or "") + else + ctx_xmlprocessbuffer("dummy",tag,xmlsetup or "") + end elseif xmlsetup then -- title is reference to node (so \xmlraw should have been used) if trace_processors then report_processors("feeding xmlsetup %a using node %a",xmlsetup,title) end - context.xmlsetup(title,metadata.xmlsetup) + ctx_xmlsetup(title,metadata.xmlsetup) else local catcodes = metadata.catcodes if catcodes == notcatcodes or catcodes == xmlcatcodes then @@ -241,9 +280,9 @@ function helpers.title(title,metadata) -- coding is xml is rather old and not th -- doesn't work when a newline is in there \section{Test\ A} so we do -- it this way: -- - pushcatcodes(catcodes) + ctx_pushcatcodes(catcodes) context(title) - popcatcodes() + ctx_popcatcodes() end end else diff --git a/tex/context/base/strc-itm.mkvi b/tex/context/base/strc-itm.mkvi index 8259fa38d..098b863b9 100644 --- a/tex/context/base/strc-itm.mkvi +++ b/tex/context/base/strc-itm.mkvi @@ -331,7 +331,7 @@ \def\strc_itemgroups_store_continue_state#options#settings% {\setxvalue{\??itemgroupoption 
\currentitemgroup}{\strc_itemgroups_process_options{#options}}% - \setgvalue{\??itemgroupsetting\currentitemgroup}{\setupcurrentitemgroup [#settings]}} + \setgvalue{\??itemgroupsetting\currentitemgroup}{\setupcurrentitemgroup[#settings]}} \def\strc_itemgroups_fetch_continue_state {\getvalue{\??itemgroupoption \currentitemgroup}% @@ -1009,8 +1009,21 @@ \strc_itemgroups_between_command \fi} -\unexpanded\def\strc_itemgroups_start_item[#1]% we can reuse more - {\def\currentitemreference{#1}% +% c_strc_itemgroups_concat: +% +% the problem is that we use leftskip so concat cannot reliably take the height into +% account; it's .. rather tricky when white space in there anyway (due to \par) .. so +% we rely on a special blank method +% +% \startitemize[n] +% \item bla +% \item \startitemize[a] +% \item bla $\displaystyle\int^{x^{y^4}}$ \item bla +% \stopitemize +% \stopitemize + +\unexpanded\def\strc_itemgroups_start_item[#reference]% we can reuse more + {\def\currentitemreference{#reference}% \ifconditional\c_strc_itemgroups_text % begin of item \else @@ -1026,10 +1039,12 @@ \strc_itemgroups_start_item_next \fi \ifconditional\c_strc_itemgroups_concat - % \vskip-\dimexpr\lastskip+\lineheight\relax - \vskip-\lastskip % we cannot use a \dimexpr here because - \vskip-\lineheight % then we loose the stretch and shrink - \nobreak + % \vskip-\lastskip % we cannot use a \dimexpr here because + % \vskip-\lineheight % then we lose the stretch and shrink + % \nobreak + % + \blank[\v!overlay]% new per 2014-03-27 + % + \setfalse\c_strc_itemgroups_concat \fi \dostarttagged\t!item\empty diff --git a/tex/context/base/strc-lab.mkiv b/tex/context/base/strc-lab.mkiv index ce4cdcc5e..3e6617126 100644 --- a/tex/context/base/strc-lab.mkiv +++ b/tex/context/base/strc-lab.mkiv @@ -58,10 +58,15 @@ {\normalexpanded{\defineconstruction[#1][#3][\s!handler=\v!label,\c!level=#2]}% \setevalue{\??label#1:\s!parent}{\??label#3}}% \ifconditional\c_strc_constructions_define_commands - \setuevalue{\e!next
#1}{\strc_labels_next {#1}{\number#2}}% obsolete - \setuevalue{\c!reset#1}{\strc_labels_reset {#1}{\number#2}}% obsolete - %setuevalue{\c!set #1}{\strc_labels_set {#1}{\number#2}}% obsolete - \setuevalue {#1}{\strc_labels_command{#1}}% + \setuevalue{\e!next #1}{\strc_labels_next {#1}{\number#2}}% obsolete + \setuevalue{\v!reset #1}{\strc_labels_reset {#1}{\number#2}}% obsolete % should be \e!reset anyway + %setuevalue{\c!set #1}{\strc_labels_set {#1}{\number#2}}% obsolete + \ifcsname\v!current#1\endcsname + % we play safe + \else + \setuevalue{\v!current#1}{\strc_labels_current{#1}}% % obsolete % should be \e!current anyway + \fi + \setuevalue {#1}{\strc_labels_command{#1}}% \fi} % todo: \strc_labels_command for user @@ -103,6 +108,8 @@ \let\p_strc_constructions_title \empty \let\p_strc_constructions_number\empty +\newconditional\c_strc_constructions_number_keep + \setvalue{\??constructioninitializer\v!label}% {\let\currentlabel \currentconstruction \let\constructionparameter \labelparameter @@ -117,7 +124,9 @@ \iftrialtypesetting \strc_counters_save\currentconstructionnumber \fi - \strc_counters_increment_sub\currentconstructionnumber\currentconstructionlevel + \ifconditional\c_strc_constructions_number_keep \else + \strc_counters_increment_sub\currentconstructionnumber\currentconstructionlevel + \fi \else \setfalse\c_strc_constructions_number_state \fi @@ -137,11 +146,12 @@ %D Interfaces: -\let\strc_labels_command\strc_descriptions_command +\unexpanded\def\strc_labels_command{\setfalse\c_strc_constructions_number_keep\strc_descriptions_command} +\unexpanded\def\strc_labels_current{\settrue \c_strc_constructions_number_keep\strc_descriptions_command} -\unexpanded\def\strc_labels_next {\strc_constructions_next_indeed \namedlabelparameter} % #1#2 -\unexpanded\def\strc_labels_reset{\strc_constructions_reset_indeed\namedlabelparameter} % #1#2 -%unexpanded\def\strc_labels_set {\strc_constructions_set_indeed \namedlabelparameter} % #1#2 +\unexpanded\def\strc_labels_next 
{\strc_constructions_next_indeed \namedlabelparameter} % #1#2 +\unexpanded\def\strc_labels_reset {\strc_constructions_reset_indeed\namedlabelparameter} % #1#2 +%unexpanded\def\strc_labels_set {\strc_constructions_set_indeed \namedlabelparameter} % #1#2 % similar to enumerations diff --git a/tex/context/base/strc-lst.lua b/tex/context/base/strc-lst.lua index d86368b6a..16160e273 100644 --- a/tex/context/base/strc-lst.lua +++ b/tex/context/base/strc-lst.lua @@ -16,7 +16,7 @@ if not modules then modules = { } end modules ['strc-lst'] = { -- move more to commands local format, gmatch, gsub = string.format, string.gmatch, string.gsub -local tonumber = tonumber +local tonumber, type = tonumber, type local concat, insert, remove = table.concat, table.insert, table.remove local lpegmatch = lpeg.match local simple_hash_to_string, settings_to_hash = utilities.parsers.simple_hash_to_string, utilities.parsers.settings_to_hash @@ -49,7 +49,7 @@ lists.collected = collected lists.tobesaved = tobesaved lists.enhancers = lists.enhancers or { } -lists.internals = allocate(lists.internals or { }) -- to be checked +-----.internals = allocate(lists.internals or { }) -- to be checked lists.ordered = allocate(lists.ordered or { }) -- to be checked lists.cached = cached lists.pushed = pushed @@ -88,6 +88,7 @@ local function initializer() local collected = lists.collected local internals = checked(references.internals) local ordered = lists.ordered + local usedinternals = references.usedinternals local blockdone = { } for i=1,#collected do local c = collected[i] @@ -99,6 +100,7 @@ local function initializer() local internal = r.internal if internal then internals[internal] = c + usedinternals[internal] = r.used end local block = r.block if block and not blockdone[block] then @@ -128,7 +130,22 @@ local function initializer() end end -job.register('structures.lists.collected', tobesaved, initializer) +local function finalizer() + local flaginternals = references.flaginternals + local 
usedviews = references.usedviews + for i=1,#tobesaved do + local r = tobesaved[i].references + if r then + local i = r.internal + local f = flaginternals[i] + if f then + r.used = usedviews[i] or true + end + end + end +end + +job.register('structures.lists.collected', tobesaved, initializer, finalizer) local groupindices = table.setmetatableindex("table") @@ -139,11 +156,11 @@ end -- we could use t (as hash key) in order to check for dup entries -function lists.addto(t) +function lists.addto(t) -- maybe more more here (saves parsing at the tex end) local m = t.metadata local u = t.userdata if u and type(u) == "string" then - t.userdata = helpers.touserdata(u) -- nicer at the tex end + t.userdata = helpers.touserdata(u) end local numberdata = t.numberdata local group = numberdata and numberdata.group @@ -158,6 +175,10 @@ function lists.addto(t) numberdata.numbers = cached[groupindex].numberdata.numbers end end + local setcomponent = references.setcomponent + if setcomponent then + setcomponent(t) -- can be inlined + end local r = t.references local i = r and r.internal or 0 -- brrr local p = pushed[i] @@ -167,10 +188,6 @@ function lists.addto(t) pushed[i] = p r.listindex = p end - local setcomponent = references.setcomponent - if setcomponent then - setcomponent(t) -- might move to the tex end - end if group then groupindices[name][group] = p end diff --git a/tex/context/base/strc-lst.mkvi b/tex/context/base/strc-lst.mkvi index f78881221..0008f0602 100644 --- a/tex/context/base/strc-lst.mkvi +++ b/tex/context/base/strc-lst.mkvi @@ -147,8 +147,8 @@ \ifx\p_location\v!here % this branch injects nodes ! 
\expanded{\ctxlatecommand{enhancelist(\currentlistnumber)}}% - \ctxlua{structures.references.setinternalreference(nil,nil,\nextinternalreference)}% will change - \xdef\currentstructurelistattribute{\number\lastdestinationattribute}% + \ctxcommand{setinternalreference(nil,nil,\nextinternalreference)}% will change + \xdef\currentstructurelistattribute{\the\lastdestinationattribute}% \dontleavehmode\hbox attr \destinationattribute \lastdestinationattribute{}% todo \else % and this one doesn't @@ -1050,7 +1050,7 @@ \listparameter\c!numbercommand\currentlistsymbol \listparameter\c!right \endgroup - \kern.5em + \kern.5\emwidth\relax \nobreak \fi \fi @@ -1069,7 +1069,7 @@ \ifconditional\c_lists_has_page \ifconditional\c_lists_show_page \nobreak - \hskip.75em\relax + \hskip.75\emwidth\relax \nobreak \strc_lists_set_reference_attribute\v!pagenumber \strc_lists_set_style_color\c!pagestyle\c!pagecolor\v!pagenumber diff --git a/tex/context/base/strc-mar.lua b/tex/context/base/strc-mar.lua index 258787d0a..9c6259de4 100644 --- a/tex/context/base/strc-mar.lua +++ b/tex/context/base/strc-mar.lua @@ -712,6 +712,9 @@ end -- interface +commands.markingtitle = marks.title +commands.markingnumber = marks.number + commands.definemarking = marks.define commands.relatemarking = marks.relate commands.setmarking = marks.set diff --git a/tex/context/base/strc-not.mkvi b/tex/context/base/strc-not.mkvi index a1aecf83a..60ab66c98 100644 --- a/tex/context/base/strc-not.mkvi +++ b/tex/context/base/strc-not.mkvi @@ -231,7 +231,7 @@ \ifnotesenabled \strc_counters_increment_sub\currentconstructionnumber\currentconstructionlevel \fi - \doifnextoptionalelse\strc_notations_command_yes\strc_notations_command_nop} + \doifnextoptionalcselse\strc_notations_command_yes\strc_notations_command_nop} \unexpanded\def\strc_notations_command_nop#title% {\strc_constructions_register[\c!label={\descriptionparameter\c!text},\c!reference=,\c!title={#title},\c!bookmark=,\c!list=][]% @@ -265,7 +265,7 @@ % 
\normalexpanded % not that efficient but also not that frequently used (\normaldef for parser) % {\normaldef\noexpand\strc_pickup_yes[##1]##2\csname\e!stop#stoptag\endcsname{\strc_notations_command_yes[##1]{##2}}% % \normaldef\noexpand\strc_pickup_nop ##1\csname\e!stop#stoptag\endcsname{\strc_notations_command_nop {##1}}}% -% \doifnextoptionalelse\strc_pickup_yes\strc_pickup_nop} +% \doifnextoptionalcselse\strc_pickup_yes\strc_pickup_nop} \unexpanded\def\strc_notations_start#tag#stoptag% {\begingroup @@ -278,7 +278,7 @@ \normalexpanded % not that efficient but also not that frequently used (\normaldef for parser) {\def\noexpand\strc_pickup_yes[#one]#two\csname\e!stop#stoptag\endcsname{\strc_notations_command_yes[#one]{#two}}% \def\noexpand\strc_pickup_nop #one\csname\e!stop#stoptag\endcsname{\strc_notations_command_nop {#one}}}% - \doifnextoptionalelse\strc_pickup_yes\strc_pickup_nop} + \doifnextoptionalcselse\strc_pickup_yes\strc_pickup_nop} \unexpanded\def\strc_notations_start_yes[#reference]#title% {\strc_constructions_register[\c!label={\descriptionparameter\c!text},\c!reference={#reference},\c!title={#title},\c!bookmark=,\c!list=][]% @@ -460,7 +460,11 @@ \else\ifconditional\inlocalnotes % todo: per note class \global\settrue\postponednote \else +\ifconditional\c_strc_notes_delayed + % probably end notes +\else \handlenoteinsert\currentnote\currentnotenumber % either an insert or just delayed +\fi \fi\fi \endgroup \fi @@ -756,7 +760,9 @@ %appendtoks \notesenabledfalse \to \everymarking \appendtoks \notesenabledfalse \to \everybeforepagebody -\appendtoks \notesenabledfalse \to \everystructurelist % quick hack +\appendtoks \notesenabledfalse \to \everystructurelist % quick hack +\appendtoks \notesenabledfalse \to \everysimplifycommands % quick hack +\appendtoks \notesenabledfalse \to \everypreroll % quick hack %D Often we need to process the whole set of notes and to make that %D fast, we use a token register: @@ -1242,6 +1248,7 @@ \appendtoks 
\doif{\noteparameter\c!scope}\v!page{\floatingpenalty\maxdimen}% experiment \penalty\currentnotepenalty + %\interlinepenalty\maxdimen % todo \forgetall \strc_notes_set_bodyfont \redoconvertfont % to undo \undo calls in in headings etc @@ -1298,6 +1305,7 @@ \strc_notes_set_bodyfont \setbox\scratchbox\hbox {\strc_notes_flush_inserts}% + \page_postprocessors_linenumbers_deepbox\scratchbox \setbox\scratchbox\hbox {\setupcurrentnote [\c!location=, @@ -1778,19 +1786,19 @@ {\dodoubleempty\strc_notes_symbol} \def\strc_notes_symbol[#tag][#reference]% - {\dontleavehmode - \begingroup - \edef\currentnote{#tag}% - \usenotestyleandcolor\c!textstyle\c!textcolor - \ifnotesenabled + {\ifnotesenabled + \dontleavehmode + \begingroup + \edef\currentnote{#tag}% + \usenotestyleandcolor\c!textstyle\c!textcolor \ifsecondargument \unskip \noteparameter\c!textcommand{\in[#reference]}% command here? \else \noteparameter\c!textcommand\lastnotesymbol % check if command double \fi - \fi - \endgroup} + \endgroup + \fi} \unexpanded\def\note {\dodoubleempty\strc_notes_note} diff --git a/tex/context/base/strc-num.lua b/tex/context/base/strc-num.lua index 67e9b1734..e1fc60030 100644 --- a/tex/context/base/strc-num.lua +++ b/tex/context/base/strc-num.lua @@ -404,7 +404,7 @@ function counters.restart(name,n,newstart,noreset) if newstart then local d = allocate(name,n) d.start = newstart - if not noreset then + if not noreset then -- why / when needed ? reset(name,n) -- hm end end @@ -589,8 +589,13 @@ function commands.doifnotcounter (name) commands.doifnot (counterdata[name]) end function commands.incrementedcounter(...) context(counters.add(...)) end +-- the noreset is somewhat messy ... always false messes up e.g. 
itemize but true the pagenumbers +-- +-- if this fails i'll clean up this still somewhat experimental mechanism (but i need use cases) + function commands.checkcountersetup(name,level,start,state) - counters.restart(name,1,start,true) -- no reset + local noreset = true -- level > 0 -- was true + counters.restart(name,1,start,noreset) -- was true counters.setstate(name,state) counters.setlevel(name,level) sections.setchecker(name,level,counters.reset) diff --git a/tex/context/base/strc-num.mkiv b/tex/context/base/strc-num.mkiv index 2fa8b0e9a..6802027e6 100644 --- a/tex/context/base/strc-num.mkiv +++ b/tex/context/base/strc-num.mkiv @@ -17,6 +17,8 @@ \unprotect +\startcontextdefinitioncode + % work in progress % to be checked: can we use the command handler code here? % all settings will move to lua @@ -63,6 +65,11 @@ \appendtoks \ifx\currentcounter\empty \else + \edef\p_number{\counterparameter\c!number}% + \ifx\p_number\empty \else + \ctxcommand{setcounter("\counterparameter\s!name",1,\number\p_number)}% + \letcounterparameter\c!number\empty + \fi \edef\p_start{\counterparameter\c!start}% \setexpandedcounterparameter\c!start{\ifx\p_start\empty0\else\number\p_start\fi}% \strc_counters_check_setup @@ -351,7 +358,7 @@ {\begingroup \edef\currentcounter{#1}% \ifsecondargument\setupcurrentcounter[#2]\fi - \ctxlua{structures.sections.prefixedconverted( + \ctxcommand{prefixedconverted( "\counterparameter\s!name", { prefix = "\counterparameter\c!prefix", @@ -379,7 +386,7 @@ \endgroup} \def\directconvertedcounter#1#2% name, type - {\ctxlua{structures.sections.prefixedconverted( + {\ctxcommand{prefixedconverted( "\namedcounterparameter{#1}\s!name", { prefix = "\namedcounterparameter{#1}\c!prefix", @@ -480,6 +487,7 @@ % currentstructurecomponent => \strc_current_ or just \m_strc_ + \unexpanded\def\strc_counters_register_component#1#2#3#4#5#6#7[#8][#9]% maybe also nolist {\begingroup % @@ -504,119 +512,153 @@ \fi % \ifx\p_hascaption\v!yes - \xdef\currentstructurecomponentname 
{#3\s!name}% - \xdef\currentstructurecomponentlevel {#3\c!level}% - \edef\currentstructurecomponentexpansion {#3\c!expansion}% - \xdef\currentstructurecomponentxmlsetup {#3\c!xmlsetup}% - \xdef\currentstructurecomponentcatcodes {#3\s!catcodes}% - \xdef\currentstructurecomponentlabel {#3\c!label}% - \xdef\currentstructurecomponentreference {#3\c!reference}% - \xdef\currentstructurecomponentreferenceprefix{#3\c!referenceprefix}% - \ifx\currentstructurecomponentexpansion\s!xml - \xmlstartraw - \xdef\currentstructurecomponenttitle {#3\c!title}% - \xdef\currentstructurecomponentbookmark{#3\c!bookmark}% - \xdef\currentstructurecomponentmarking {#3\c!marking}% - \xdef\currentstructurecomponentlist {#3\c!list}% - \xmlstopraw - \ifx\currentstructurecomponentlist\empty - \globallet\currentstructurecomponentlist\currentstructurecomponenttitle - \fi - \globallet\currentstructurecomponentcoding\s!xml + \strc_counters_register_component_list{#1}{#3}{#4}{#9}% + \else\ifx\currentstructurecomponentreference\empty + \strc_counters_register_component_none + \else + \strc_counters_register_component_page{#3}% + \fi\fi + \endgroup} + +\def\strc_counters_register_component_none + {\glet\m_strc_counters_last_registered_index \relax + \glet\m_strc_counters_last_registered_attribute \attributeunsetvalue + \glet\m_strc_counters_last_registered_synchronize\relax} + +\def\strc_counters_register_component_page#1% + {\xdef\currentstructurecomponentreference {#1\c!reference}% + \xdef\currentstructurecomponentreferenceprefix{#1\c!referenceprefix}% + % maybe have a helper in strc-ref.mkvi + \setnextinternalreference + \ctxcommand{setreferenceattribute(% can be helper with less passed + "\s!page", + "\currentstructurecomponentreferenceprefix", + "\currentstructurecomponentreference", + { + references = { + internal = \nextinternalreference, + block = "\currentsectionblock", + section = structures.sections.currentid(), + }, + metadata = { + kind = "page", + }, + }, + "\interactionparameter\c!focus") 
+ }% + \xdef\m_strc_counters_last_registered_attribute {\the\lastdestinationattribute}% + \glet\m_strc_counters_last_registered_index \relax + \glet\m_strc_counters_last_registered_synchronize\relax} + +\def\strc_counters_register_component_list#1#2#3#4% + {\xdef\currentstructurecomponentname {#2\s!name}% + \xdef\currentstructurecomponentlevel {#2\c!level}% + \edef\currentstructurecomponentexpansion {#2\c!expansion}% + \xdef\currentstructurecomponentxmlsetup {#2\c!xmlsetup}% + \xdef\currentstructurecomponentcatcodes {#2\s!catcodes}% + \xdef\currentstructurecomponentlabel {#2\c!label}% + \xdef\currentstructurecomponentreference {#2\c!reference}% + \xdef\currentstructurecomponentreferenceprefix{#2\c!referenceprefix}% + \ifx\currentstructurecomponentexpansion\s!xml + \xmlstartraw + \xdef\currentstructurecomponenttitle {#2\c!title}% + \xdef\currentstructurecomponentbookmark{#2\c!bookmark}% + \xdef\currentstructurecomponentmarking {#2\c!marking}% + \xdef\currentstructurecomponentlist {#2\c!list}% + \xmlstopraw + \ifx\currentstructurecomponentlist\empty + \globallet\currentstructurecomponentlist\currentstructurecomponenttitle + \fi + \globallet\currentstructurecomponentcoding\s!xml + \else + \ifx\currentstructurecomponentexpansion\v!yes + \xdef\currentstructurecomponenttitle {#2\c!title}% + \xdef\currentstructurecomponentbookmark{#2\c!bookmark}% + \xdef\currentstructurecomponentmarking {#2\c!marking}% + \xdef\currentstructurecomponentlist {#2\c!list}% \else - \ifx\currentstructurecomponentexpansion\v!yes - \xdef\currentstructurecomponenttitle {#3\c!title}% - \xdef\currentstructurecomponentbookmark{#3\c!bookmark}% - \xdef\currentstructurecomponentmarking {#3\c!marking}% - \xdef\currentstructurecomponentlist {#3\c!list}% - \else - \xdef\currentstructurecomponenttitle {#4\c!title}% - \xdef\currentstructurecomponentbookmark{#4\c!bookmark}% - \xdef\currentstructurecomponentmarking {#4\c!marking}% - \xdef\currentstructurecomponentlist {#4\c!list}% - \iflocation 
\ifx\currentstructurecomponentbookmark\empty - \begingroup - \simplifycommands - \xdef\currentstructurecomponentbookmark{\detokenize\expandafter{\normalexpanded{#3\c!title}}}% - \endgroup - \fi \fi - \fi - \ifx\currentstructurecomponentlist\empty - \globallet\currentstructurecomponentlist\currentstructurecomponenttitle - \fi - \globallet\currentstructurecomponentcoding\s!tex + \xdef\currentstructurecomponenttitle {#3\c!title}% + \xdef\currentstructurecomponentbookmark{#3\c!bookmark}% + \xdef\currentstructurecomponentmarking {#3\c!marking}% + \xdef\currentstructurecomponentlist {#3\c!list}% + \iflocation \ifx\currentstructurecomponentbookmark\empty + \begingroup + \simplifycommands + \xdef\currentstructurecomponentbookmark{\detokenize\expandafter{\normalexpanded{#2\c!title}}}% + \endgroup + \fi \fi \fi - % - \setnextinternalreference - \xdef\m_strc_counters_last_registered_index{\ctxcommand{addtolist{ - metadata = { - kind = "#1", - name = "\currentname", - level = structures.sections.currentlevel(), - catcodes = \the\ifx\currentstructurecomponentcatcodes\empty\catcodetable\else\csname\currentstructurecomponentcatcodes\endcsname\fi, - coding = "\currentstructurecomponentcoding", - \ifx\currentstructurecomponentcoding\s!xml - xmlroot = "\xmldocument", - \fi - \ifx\currentstructurecomponentxmlsetup\empty \else - xmlsetup = "\currentstructurexmlsetup", - \fi - }, - references = { - internal = \nextinternalreference, - block = "\currentsectionblock", - reference = "\currentstructurecomponentreference", - referenceprefix = "\currentstructurecomponentreferenceprefix", - section = structures.sections.currentid(), - }, - titledata = { - label = \!!bs\detokenize\expandafter{\currentstructurecomponentlabel }\!!es, - title = \!!bs\detokenize\expandafter{\currentstructurecomponenttitle }\!!es, - \ifx\currentstructurecomponentbookmark\currentstructurecomponenttitle \else - bookmark = \!!bs\detokenize\expandafter{\currentstructurecomponentbookmark }\!!es, - \fi - 
\ifx\currentstructurecomponentmarking\currentstructurecomponenttitle \else - marking = \!!bs\detokenize\expandafter{\currentstructurecomponentmarking }\!!es, - \fi - \ifx\currentstructurecomponentlist\currentstructurecomponenttitle \else - list = \!!bs\detokenize\expandafter{\currentstructurecomponentlist}\!!es, - \fi - }, + \ifx\currentstructurecomponentlist\empty + \globallet\currentstructurecomponentlist\currentstructurecomponenttitle + \fi + \globallet\currentstructurecomponentcoding\s!tex + \fi + % + \setnextinternalreference + \xdef\m_strc_counters_last_registered_index{\ctxcommand{addtolist{ + metadata = { + kind = "#1", + name = "\currentname", + level = structures.sections.currentlevel(), + catcodes = \the\ifx\currentstructurecomponentcatcodes\empty\catcodetable\else\csname\currentstructurecomponentcatcodes\endcsname\fi, + coding = "\currentstructurecomponentcoding", + \ifx\currentstructurecomponentcoding\s!xml + xmlroot = "\xmldocument", + \fi + \ifx\currentstructurecomponentxmlsetup\empty \else + xmlsetup = "\currentstructurexmlsetup", + \fi + }, + references = { + internal = \nextinternalreference, + block = "\currentsectionblock", + reference = "\currentstructurecomponentreference", + referenceprefix = "\currentstructurecomponentreferenceprefix", + section = structures.sections.currentid(), + }, + titledata = { + label = \!!bs\detokenize\expandafter{\currentstructurecomponentlabel }\!!es, + title = \!!bs\detokenize\expandafter{\currentstructurecomponenttitle }\!!es, + \ifx\currentstructurecomponentbookmark\currentstructurecomponenttitle \else + bookmark = \!!bs\detokenize\expandafter{\currentstructurecomponentbookmark}\!!es, + \fi + \ifx\currentstructurecomponentmarking\currentstructurecomponenttitle \else + marking = \!!bs\detokenize\expandafter{\currentstructurecomponentmarking }\!!es, + \fi + \ifx\currentstructurecomponentlist\currentstructurecomponenttitle \else + list = \!!bs\detokenize\expandafter{\currentstructurecomponentlist}\!!es, + \fi + }, 
\ifx\p_hasnumber\v!yes - prefixdata = { - prefix = "#3\c!prefix", - separatorset = "#3\c!prefixseparatorset", - conversion = \!!bs#3\c!prefixconversion\!!es, - conversionset = "#3\c!prefixconversionset", - set = "#3\c!prefixset", - % segments = "#3\c!prefixsegments", - segments = "\p_prefixsegments", - connector = \!!bs#3\c!prefixconnector\!!es, - }, - numberdata = { % more helpers here, like compact elsewhere - numbers = structures.counters.compact("\currentcounter",nil,true), - group = "#3\c!group", - groupsuffix = \!!bs#3\c!groupsuffix\!!es, - counter = "\currentcounter", - separatorset = "#3\c!numberseparatorset", - conversion = \!!bs#3\c!numberconversion\!!es, - conversionset = "#3\c!numberconversionset", - starter = \!!bs#3\c!numberstarter\!!es, - stopper = \!!bs#3\c!numberstopper\!!es, - segments = "#3\c!numbersegments", - }, + prefixdata = { + prefix = "#2\c!prefix", + separatorset = "#2\c!prefixseparatorset", + conversion = \!!bs#2\c!prefixconversion\!!es, + conversionset = "#2\c!prefixconversionset", + set = "#2\c!prefixset", + % segments = "#2\c!prefixsegments", + segments = "\p_prefixsegments", + connector = \!!bs#2\c!prefixconnector\!!es, + }, + numberdata = { % more helpers here, like compact elsewhere + numbers = structures.counters.compact("\currentcounter",nil,true), + group = "#2\c!group", + groupsuffix = \!!bs#2\c!groupsuffix\!!es, + counter = "\currentcounter", + separatorset = "#2\c!numberseparatorset", + conversion = \!!bs#2\c!numberconversion\!!es, + conversionset = "#2\c!numberconversionset", + starter = \!!bs#2\c!numberstarter\!!es, + stopper = \!!bs#2\c!numberstopper\!!es, + segments = "#2\c!numbersegments", + }, \fi - userdata = \!!bs\detokenize{#9}\!!es % will be converted to table at the lua end - } - }}% - \xdef\m_strc_counters_last_registered_attribute {\ctxcommand {setinternalreference(nil,nil,\nextinternalreference)}}% - 
\xdef\m_strc_counters_last_registered_synchronize{\ctxlatecommand{enhancelist(\m_strc_counters_last_registered_index)}}% - \else - \glet\m_strc_counters_last_registered_index \relax - \glet\m_strc_counters_last_registered_attribute \attributeunsetvalue - \glet\m_strc_counters_last_registered_synchronize\relax - \fi - \endgroup} + userdata = \!!bs\detokenize{#4}\!!es % will be converted to table at the lua end + } + }}% + \ctxcommand{setinternalreference(nil,nil,\nextinternalreference)}% + \xdef\m_strc_counters_last_registered_attribute {\the\lastdestinationattribute}% + \xdef\m_strc_counters_last_registered_synchronize{\ctxlatecommand{enhancelist(\m_strc_counters_last_registered_index)}}} \let\m_strc_counters_last_registered_index \relax \let\m_strc_counters_last_registered_attribute \relax @@ -764,4 +806,6 @@ % \fi % \to \everysetupcounter +\stopcontextdefinitioncode + \protect \endinput diff --git a/tex/context/base/strc-pag.lua b/tex/context/base/strc-pag.lua index fd0a367aa..c294a4645 100644 --- a/tex/context/base/strc-pag.lua +++ b/tex/context/base/strc-pag.lua @@ -34,6 +34,8 @@ local stopapplyprocessor = processors.stopapply local texsetcount = tex.setcount local texgetcount = tex.getcount +local ctx_convertnumber = context.convertnumber + -- storage local collected, tobesaved = allocate(), allocate() @@ -101,11 +103,11 @@ function counters.specials.userpage() end end -local f_convert = string.formatters["\\convertnumber{%s}{%s}"] - -local function convertnumber(str,n) - return f_convert(str or "numbers",n) -end +-- local f_convert = string.formatters["\\convertnumber{%s}{%s}"] +-- +-- local function convertnumber(str,n) +-- return f_convert(str or "numbers",n) +-- end function pages.number(realdata,pagespec) local userpage, block = realdata.number, realdata.block or "" -- sections.currentblock() @@ -118,12 +120,12 @@ function pages.number(realdata,pagespec) applyprocessor(starter) end if conversion ~= "" then - context.convertnumber(conversion,userpage) + 
ctx_convertnumber(conversion,userpage) else if conversionset == "" then conversionset = "default" end local theconversion = sets.get("structure:conversions",block,conversionset,1,"numbers") -- to be checked: 1 local data = startapplyprocessor(theconversion) - context.convertnumber(data or "number",userpage) + ctx_convertnumber(data or "number",userpage) stopapplyprocessor() end if stopper ~= "" then @@ -318,3 +320,8 @@ function sections.prefixedconverted(name,prefixspec,numberspec) counters.converted(name,numberspec) end end + +-- + +commands.savepagedata = pages.save +commands.prefixedconverted = sections.prefixedconverted -- weird place diff --git a/tex/context/base/strc-pag.mkiv b/tex/context/base/strc-pag.mkiv index c4e9819ba..6eddc0fba 100644 --- a/tex/context/base/strc-pag.mkiv +++ b/tex/context/base/strc-pag.mkiv @@ -17,6 +17,8 @@ \unprotect +\startcontextdefinitioncode + % Allocation: \countdef\realpageno \zerocount \realpageno \plusone @@ -109,7 +111,7 @@ % invisible = \def\strc_pagenumbers_page_state_save % \normalexpanded? 
- {\ctxlua{structures.pages.save({ + {\ctxcommand{savepagedata({ prefix = "\namedcounterparameter\s!userpage\c!prefix", separatorset = "\namedcounterparameter\s!userpage\c!prefixseparatorset", conversion = "\namedcounterparameter\s!userpage\c!prefixconversion", @@ -462,4 +464,6 @@ \initializepagecounters +\stopcontextdefinitioncode + \protect \endinput diff --git a/tex/context/base/strc-ref.lua b/tex/context/base/strc-ref.lua index 938af1ad7..0c8bb6e53 100644 --- a/tex/context/base/strc-ref.lua +++ b/tex/context/base/strc-ref.lua @@ -16,7 +16,7 @@ if not modules then modules = { } end modules ['strc-ref'] = { local format, find, gmatch, match, strip = string.format, string.find, string.gmatch, string.match, string.strip local floor = math.floor -local rawget, tonumber = rawget, tonumber +local rawget, tonumber, type = rawget, tonumber, type local lpegmatch = lpeg.match local insert, remove, copytable = table.insert, table.remove, table.copy local formatters = string.formatters @@ -44,7 +44,13 @@ local report_importing = logs.reporter("references","importing") local report_empty = logs.reporter("references","empty") local variables = interfaces.variables -local constants = interfaces.constants +local v_default = variables.default +local v_url = variables.url +local v_file = variables.file +local v_unknown = variables.unknown +local v_page = variables.page +local v_auto = variables.auto + local context = context local commands = commands @@ -52,11 +58,6 @@ local texgetcount = tex.getcount local texsetcount = tex.setcount local texconditionals = tex.conditionals -local v_default = variables.default -local v_url = variables.url -local v_file = variables.file -local v_unknown = variables.unknown -local v_yes = variables.yes local productcomponent = resolvers.jobs.productcomponent local justacomponent = resolvers.jobs.justacomponent @@ -91,6 +92,9 @@ local tobesaved = allocate() local collected = allocate() local tobereferred = allocate() local referred = allocate() 
+local usedinternals = allocate() +local flaginternals = allocate() +local usedviews = allocate() references.derived = derived references.specials = specials @@ -103,6 +107,9 @@ references.tobesaved = tobesaved references.collected = collected references.tobereferred = tobereferred references.referred = referred +references.usedinternals = usedinternals +references.flaginternals = flaginternals +references.usedviews = usedviews local splitreference = references.splitreference local splitprefix = references.splitcomponent -- replaces: references.splitprefix @@ -111,6 +118,22 @@ local componentsplitter = references.componentsplitter local currentreference = nil +local txtcatcodes = catcodes.numbers.txtcatcodes -- or just use "txtcatcodes" +local context_delayed = context.delayed + +local ctx_pushcatcodes = context.pushcatcodes +local ctx_popcatcodes = context.popcatcodes +local ctx_dofinishsomereference = context.dofinishsomereference +local ctx_dofromurldescription = context.dofromurldescription +local ctx_dofromurlliteral = context.dofromurlliteral +local ctx_dofromfiledescription = context.dofromfiledescription +local ctx_dofromfileliteral = context.dofromfileliteral +local ctx_expandreferenceoperation = context.expandreferenceoperation +local ctx_expandreferencearguments = context.expandreferencearguments +local ctx_getreferencestructureprefix = context.getreferencestructureprefix +local ctx_convertnumber = context.convertnumber +local ctx_emptyreference = context.emptyreference + storage.register("structures/references/defined", references.defined, "structures.references.defined") local initializers = { } @@ -119,6 +142,7 @@ local finalizers = { } function references.registerinitializer(func) -- we could use a token register instead initializers[#initializers+1] = func end + function references.registerfinalizer(func) -- we could use a token register instead finalizers[#finalizers+1] = func end @@ -129,12 +153,32 @@ local function initializer() -- can we use a 
tobesaved as metatable for collecte for i=1,#initializers do initializers[i](tobesaved,collected) end + for prefix, list in next, collected do + for tag, data in next, list do + local r = data.references + local i = r.internal + if i then + internals[i] = c + usedinternals[i] = r.used + end + end + end end local function finalizer() for i=1,#finalizers do finalizers[i](tobesaved) end + for prefix, list in next, tobesaved do + for tag, data in next, list do + local r = data.references + local i = r.internal + local f = flaginternals[i] + if f then + r.used = usedviews[i] or true + end + end + end end job.register('structures.references.collected', tobesaved, initializer, finalizer) @@ -148,6 +192,38 @@ local function initializer() -- can we use a tobesaved as metatable for collecte nofreferred = #referred end +-- no longer fone this way + +-- references.resolvers = references.resolvers or { } +-- local resolvers = references.resolvers +-- +-- function resolvers.section(var) +-- local vi = lists.collected[var.i[2]] +-- if vi then +-- var.i = vi +-- var.r = (vi.references and vi.references.realpage) or (vi.pagedata and vi.pagedata.realpage) or 1 +-- else +-- var.i = nil +-- var.r = 1 +-- end +-- end +-- +-- resolvers.float = resolvers.section +-- resolvers.description = resolvers.section +-- resolvers.formula = resolvers.section +-- resolvers.note = resolvers.section +-- +-- function resolvers.reference(var) +-- local vi = var.i[2] +-- if vi then +-- var.i = vi +-- var.r = (vi.references and vi.references.realpage) or (vi.pagedata and vi.pagedata.realpage) or 1 +-- else +-- var.i = nil +-- var.r = 1 +-- end +-- end + -- We make the array sparse (maybe a finalizer should optionally return a table) because -- there can be quite some page links involved. We only store one action number per page -- which is normally good enough for what we want (e.g. 
see above/below) and we do @@ -215,8 +291,6 @@ local function referredpage(n) return texgetcount("realpageno") end --- setmetatableindex(referred,function(t,k) return referredpage(k) end ) - references.referredpage = referredpage function references.registerpage(n) -- called in the backend code @@ -246,16 +320,15 @@ local function setnextorder(kind,name) texsetcount("global","locationorder",lastorder) end -references.setnextorder = setnextorder -function references.setnextinternal(kind,name) +local function setnextinternal(kind,name) setnextorder(kind,name) -- always incremented with internal local n = texgetcount("locationcount") + 1 texsetcount("global","locationcount",n) return n end -function references.currentorder(kind,name) +local function currentorder(kind,name) return orders[kind] and orders[kind][name] or lastorder end @@ -266,20 +339,27 @@ local function setcomponent(data) local references = data and data.references if references then references.component = component + if references.referenceprefix == component then + references.referenceprefix = nil + end end return component end -- but for the moment we do it here (experiment) end -commands.setnextinternalreference = references.setnextinternal +references.setnextorder = setnextorder +references.setnextinternal = setnextinternal +references.currentorder = currentorder +references.setcomponent = setcomponent + +commands.setnextreferenceorder = setnextorder +commands.setnextinternalreference = setnextinternal function commands.currentreferenceorder(kind,name) - context(references.currentorder(kind,name)) + context(currentorder(kind,name)) end -references.setcomponent = setcomponent - function references.set(kind,prefix,tag,data) -- setcomponent(data) local pd = tobesaved[prefix] -- nicer is a metatable @@ -288,21 +368,6 @@ function references.set(kind,prefix,tag,data) tobesaved[prefix] = pd end local n = 0 - -- for ref in gmatch(tag,"[^,]+") do - -- if ref ~= "" then - -- if check_duplicates and pd[ref] 
then - -- if prefix and prefix ~= "" then - -- report_references("redundant reference %a in namespace %a",ref,prefix) - -- else - -- report_references("redundant reference %a",ref) - -- end - -- else - -- n = n + 1 - -- pd[ref] = data - -- context.dofinishsomereference(kind,prefix,ref) - -- end - -- end - -- end local function action(ref) if ref == "" then -- skip @@ -315,7 +380,7 @@ function references.set(kind,prefix,tag,data) else n = n + 1 pd[ref] = data - context.dofinishsomereference(kind,prefix,ref) + ctx_dofinishsomereference(kind,prefix,ref) end end process_settings(tag,action) @@ -333,127 +398,85 @@ commands.enhancereference = references.enhance -- -- -- related to strc-ini.lua -- -- -- -references.resolvers = references.resolvers or { } -local resolvers = references.resolvers - -local function getfromlist(var) - local vi = var.i - if vi then - vi = vi[3] or lists.collected[vi[2]] - if vi then - local r = vi.references and vi.references - if r then - r = r.realpage - end - if not r then - r = vi.pagedata and vi.pagedata - if r then - r = r.realpage - end - end - var.i = vi - var.r = r or 1 - else - var.i = nil - var.r = 1 - end - else - var.i = nil - var.r = 1 - end -end - --- resolvers.section = getfromlist --- resolvers.float = getfromlist --- resolvers.description = getfromlist --- resolvers.formula = getfromlist --- resolvers.note = getfromlist - -setmetatableindex(resolvers,function(t,k) - local v = getfromlist - resolvers[k] = v - return v -end) - -function resolvers.reference(var) - local vi = var.i[2] -- check - if vi then - var.i = vi - var.r = (vi.references and vi.references.realpage) or (vi.pagedata and vi.pagedata.realpage) or 1 - else - var.i = nil - var.r = 1 - end -end +-- no metatable here .. 
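The hunk above replaces the commented-out `gmatch` loops with a local `action` callback handed to `process_settings`. A minimal sketch of that callback style follows; the `process_settings` here is a simplified stand-in for illustration (the real ConTeXt helper comes from the utility parsers and also understands braced entries):

```lua
-- Simplified stand-in for process_settings: split a comma separated
-- reference list and feed each non-empty item to an action callback.
-- The real ConTeXt helper also handles braces; this covers the plain case.
local function process_settings(list, action)
    for item in string.gmatch(list, "[^,]+") do
        item = item:gsub("^%s+", "")
        if item ~= "" then
            action(item)
        end
    end
end

local pd, n = { }, 0
process_settings("fig:one, fig:two", function(ref)
    if not pd[ref] then        -- duplicate check, as in references.set
        n = n + 1
        pd[ref] = true
    end
end)
```

The callback keeps the duplicate bookkeeping (`n`, `pd`) in the enclosing scope, which is why the patch can drop the inline loops without changing behavior.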
better be sparse local function register_from_lists(collected,derived,pages,sections) - local g = derived[""] if not g then g = { } derived[""] = g end -- global + local derived_g = derived[""] -- global + if not derived_g then + derived_g = { } + derived[""] = derived_g + end for i=1,#collected do - local entry = collected[i] - local m, r = entry.metadata, entry.references - if m and r then - local reference = r.reference or "" - local prefix = r.referenceprefix or "" - local component = r.component and r.component or "" - if reference ~= "" then - local kind, realpage = m.kind, r.realpage - if kind and realpage then - local d = derived[prefix] - if not d then - d = { } - derived[prefix] = d - end - local c = derived[component] - if not c then - c = { } - derived[component] = c - end - local t = { kind, i, entry } - -- for s in gmatch(reference,"%s*([^,]+)") do - -- if trace_referencing then - -- report_references("list entry %a provides %a reference %a on realpage %a",i,kind,s,realpage) - -- end - -- c[s] = c[s] or t -- share them - -- d[s] = d[s] or t -- share them - -- g[s] = g[s] or t -- first wins - -- end - local function action(s) - if trace_referencing then - report_references("list entry %a provides %a reference %a on realpage %a",i,kind,s,realpage) + local entry = collected[i] + local metadata = entry.metadata + if metadata then + local kind = metadata.kind + if kind then + local references = entry.references + if references then + local reference = references.reference + if reference and reference ~= "" then + local realpage = references.realpage + if realpage then + local prefix = references.referenceprefix + local component = references.component + local derived_p = nil + local derived_c = nil + if prefix and prefix ~= "" then + derived_p = derived[prefix] + if not derived_p then + derived_p = { } + derived[prefix] = derived_p + end + end + if component and component ~= "" and component ~= prefix then + derived_c = derived[component] + if not 
derived_c then + derived_c = { } + derived[component] = derived_c + end + end + local function action(s) + if trace_referencing then + report_references("list entry %a provides %a reference %a on realpage %a",i,kind,s,realpage) + end + if derived_p and not derived_p[s] then + derived_p[s] = entry + end + if derived_c and not derived_c[s] then + derived_c[s] = entry + end + if not derived_g[s] then + derived_g[s] = entry -- first wins + end + end + process_settings(reference,action) end - c[s] = c[s] or t -- share them - d[s] = d[s] or t -- share them - g[s] = g[s] or t -- first wins end - process_settings(reference,action) end end end end --- inspect(derived) + -- inspect(derived) end references.registerinitializer(function() register_from_lists(lists.collected,derived) end) -- urls -references.urls = references.urls or { } -references.urls.data = references.urls.data or { } +local urls = references.urls or { } +references.urls = urls +local urldata = urls.data or { } +urls.data = urldata -local urls = references.urls.data - -function references.urls.define(name,url,file,description) +function urls.define(name,url,file,description) if name and name ~= "" then - urls[name] = { url or "", file or "", description or url or file or ""} + urldata[name] = { url or "", file or "", description or url or file or ""} end end -local pushcatcodes = context.pushcatcodes -local popcatcodes = context.popcatcodes -local txtcatcodes = catcodes.numbers.txtcatcodes -- or just use "txtcatcodes" - -function references.urls.get(name) - local u = urls[name] +function urls.get(name) + local u = urldata[name] if u then local url, file = u[1], u[2] if file and file ~= "" then @@ -465,58 +488,58 @@ function references.urls.get(name) end function commands.geturl(name) - local url = references.urls.get(name) + local url = urls.get(name) if url and url ~= "" then - pushcatcodes(txtcatcodes) + ctx_pushcatcodes(txtcatcodes) context(url) - popcatcodes() + ctx_popcatcodes() end end -- function 
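The rewritten `register_from_lists` above files every list entry under its prefix, its component (when different), and the global namespace `""`, where an earlier entry is never overwritten ("first wins"). A reduced sketch of that registration shape, with simplified names and entries for illustration:

```lua
-- Sketch of the "first wins" registration: a reference is filed under
-- its prefix, its component (when different from the prefix) and the
-- global namespace "", where the first registered entry is kept.
local derived = { [""] = { } }

local function register(prefix, component, tag, entry)
    local g = derived[""]
    if prefix and prefix ~= "" then
        local p = derived[prefix]
        if not p then p = { } derived[prefix] = p end
        if not p[tag] then p[tag] = entry end
    end
    if component and component ~= "" and component ~= prefix then
        local c = derived[component]
        if not c then c = { } derived[component] = c end
        if not c[tag] then c[tag] = entry end
    end
    if not g[tag] then g[tag] = entry end -- first wins
end

register("doc",   "chap", "intro", { page = 1 })
register("other", nil,    "intro", { page = 9 }) -- global slot already taken
```

Prefixed lookups still find their own entry, while the unprefixed (global) lookup keeps resolving to the first registered one.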
commands.gethyphenatedurl(name,...) --- local url = references.urls.get(name) +-- local url = urls.get(name) -- if url and url ~= "" then -- hyphenatedurl(url,...) -- end -- end function commands.doifurldefinedelse(name) - commands.doifelse(urls[name]) + commands.doifelse(urldata[name]) end -commands.useurl= references.urls.define +commands.useurl= urls.define -- files -references.files = references.files or { } -references.files.data = references.files.data or { } - -local files = references.files.data +local files = references.files or { } +references.files = files +local filedata = files.data or { } +files.data = filedata -function references.files.define(name,file,description) +function files.define(name,file,description) if name and name ~= "" then - files[name] = { file or "", description or file or "" } + filedata[name] = { file or "", description or file or "" } end end -function references.files.get(name,method,space) -- method: none, before, after, both, space: yes/no - local f = files[name] +function files.get(name,method,space) -- method: none, before, after, both, space: yes/no + local f = filedata[name] if f then context(f[1]) end end function commands.doiffiledefinedelse(name) - commands.doifelse(files[name]) + commands.doifelse(filedata[name]) end -commands.usefile= references.files.define +commands.usefile= files.define -- helpers function references.checkedfile(whatever) -- return whatever if not resolved if whatever then - local w = files[whatever] + local w = filedata[whatever] if w then return w[1] else @@ -527,7 +550,7 @@ end function references.checkedurl(whatever) -- return whatever if not resolved if whatever then - local w = urls[whatever] + local w = urldata[whatever] if w then local u, f = w[1], w[2] if f and f ~= "" then @@ -543,11 +566,11 @@ end function references.checkedfileorurl(whatever,default) -- return nil, nil if not resolved if whatever then - local w = files[whatever] + local w = filedata[whatever] if w then return w[1], nil 
else - local w = urls[whatever] + local w = urldata[whatever] if w then local u, f = w[1], w[2] if f and f ~= "" then @@ -563,25 +586,25 @@ end -- programs -references.programs = references.programs or { } -references.programs.data = references.programs.data or { } +local programs = references.programs or { } +references.programs = programs +local programdata = programs.data or { } +programs.data = programdata -local programs = references.programs.data - -function references.programs.define(name,file,description) +function programs.define(name,file,description) if name and name ~= "" then - programs[name] = { file or "", description or file or ""} + programdata[name] = { file or "", description or file or ""} end end -function references.programs.get(name) - local f = programs[name] +function programs.get(name) + local f = programdata[name] return f and f[1] end function references.checkedprogram(whatever) -- return whatever if not resolved if whatever then - local w = programs[whatever] + local w = programdata[whatever] if w then return w[1] else @@ -590,10 +613,10 @@ function references.checkedprogram(whatever) -- return whatever if not resolved end end -commands.defineprogram = references.programs.define +commands.defineprogram = programs.define function commands.getprogram(name) - local f = programs[name] + local f = programdata[name] if f then context(f[1]) end @@ -602,11 +625,11 @@ end -- shared by urls and files function references.whatfrom(name) - context((urls[name] and v_url) or (files[name] and v_file) or v_unknown) + context((urldata[name] and v_url) or (filedata[name] and v_file) or v_unknown) end function references.from(name) - local u = urls[name] + local u = urldata[name] if u then local url, file, description = u[1], u[2], u[3] if description ~= "" then @@ -618,7 +641,7 @@ function references.from(name) return url end else - local f = files[name] + local f = filedata[name] if f then local file, description = f[1], f[2] if description ~= "" then @@ 
-631,25 +654,25 @@ function references.from(name) end function commands.from(name) - local u = urls[name] + local u = urldata[name] if u then local url, file, description = u[1], u[2], u[3] if description ~= "" then - context.dofromurldescription(description) + ctx_dofromurldescription(description) -- ok elseif file and file ~= "" then - context.dofromurlliteral(url .. "/" .. file) + ctx_dofromurlliteral(url .. "/" .. file) else - context.dofromurlliteral(url) + ctx_dofromurlliteral(url) end else - local f = files[name] + local f = filedata[name] if f then local file, description = f[1], f[2] if description ~= "" then - context.dofromfiledescription(description) + ctx_dofromfiledescription(description) else - context.dofromfileliteral(file) + ctx_dofromfileliteral(file) end end end @@ -657,7 +680,7 @@ end function references.define(prefix,reference,list) local d = defined[prefix] if not d then d = { } defined[prefix] = d end - d[reference] = { "defined", list } + d[reference] = list end function references.reset(prefix,reference) @@ -678,33 +701,92 @@ commands.resetreference = references.reset -- to what extend do we check the non prefixed variant -local strict = false +-- local strict = false +-- +-- local function resolve(prefix,reference,args,set) -- we start with prefix,reference +-- if reference and reference ~= "" then +-- if not set then +-- set = { prefix = prefix, reference = reference } +-- else +-- if not set.reference then set.reference = reference end +-- if not set.prefix then set.prefix = prefix end +-- end +-- local r = settings_to_array(reference) +-- for i=1,#r do +-- local ri = r[i] +-- local d +-- if strict then +-- d = defined[prefix] or defined[""] +-- d = d and d[ri] +-- else +-- d = defined[prefix] +-- d = d and d[ri] +-- if not d then +-- d = defined[""] +-- d = d and d[ri] +-- end +-- end +-- if d then +-- resolve(prefix,d,nil,set) +-- else +-- local var = splitreference(ri) +-- if var then +-- var.reference = ri +-- local vo, vi = 
var.outer, var.inner +-- if not vo and vi then +-- -- to be checked +-- if strict then +-- d = defined[prefix] or defined[""] +-- d = d and d[vi] +-- else +-- d = defined[prefix] +-- d = d and d[vi] +-- if not d then +-- d = defined[""] +-- d = d and d[vi] +-- end +-- end +-- -- +-- if d then +-- resolve(prefix,d,var.arguments,set) -- args can be nil +-- else +-- if args then var.arguments = args end +-- set[#set+1] = var +-- end +-- else +-- if args then var.arguments = args end +-- set[#set+1] = var +-- end +-- if var.has_tex then +-- set.has_tex = true +-- end +-- else +-- -- report_references("funny pattern %a",ri) +-- end +-- end +-- end +-- return set +-- else +-- return { } +-- end +-- end + +setmetatableindex(defined,"table") local function resolve(prefix,reference,args,set) -- we start with prefix,reference if reference and reference ~= "" then if not set then set = { prefix = prefix, reference = reference } else - set.reference = set.reference or reference - set.prefix = set.prefix or prefix + if not set.reference then set.reference = reference end + if not set.prefix then set.prefix = prefix end end local r = settings_to_array(reference) for i=1,#r do local ri = r[i] - local d - if strict then - d = defined[prefix] or defined[""] - d = d and d[ri] - else - d = defined[prefix] - d = d and d[ri] - if not d then - d = defined[""] - d = d and d[ri] - end - end + local d = defined[prefix][ri] or defined[""][ri] if d then - resolve(prefix,d[2],nil,set) + resolve(prefix,d,nil,set) else local var = splitreference(ri) if var then @@ -712,20 +794,10 @@ local function resolve(prefix,reference,args,set) -- we start with prefix,refere local vo, vi = var.outer, var.inner if not vo and vi then -- to be checked - if strict then - d = defined[prefix] or defined[""] - d = d and d[vi] - else - d = defined[prefix] - d = d and d[vi] - if not d then - d = defined[""] - d = d and d[vi] - end - end + d = defined[prefix][vi] or defined[""][vi] -- if d then - 
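The `setmetatableindex(defined,"table")` call above is what lets the rewritten `resolve` collapse the old strict/non-strict branching into a plain `defined[prefix][ri] or defined[""][ri]` lookup: missing prefix slots are autovivified instead of guarded. A small sketch of the idiom, with a local reimplementation of the helper for illustration (ConTeXt provides it as `table.setmetatableindex`):

```lua
-- Autovivification idiom behind setmetatableindex(defined,"table"):
-- indexing a missing key creates and stores an empty subtable, so a
-- chained lookup like defined[prefix][tag] never errors on nil.
-- Local reimplementation for illustration only.
local function setmetatableindex(t, how)
    if how == "table" then
        setmetatable(t, { __index = function(t, k)
            local v = { }
            rawset(t, k, v)
            return v
        end })
    end
    return t
end

local defined = setmetatableindex({ }, "table")
defined["doc"]["intro"] = "page:12"

local hit  = defined["doc"]["intro"] or defined[""]["intro"]
local miss = defined["nosuch"]["tag"] -- nil, but no "index a nil value" error
```

The trade-off is that every failed lookup materializes an empty subtable, which is acceptable here because the set of prefixes is small.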
resolve(prefix,d[2],var.arguments,set) -- args can be nil + resolve(prefix,d,var.arguments,set) -- args can be nil else if args then var.arguments = args end set[#set+1] = var @@ -760,21 +832,18 @@ function commands.setreferencearguments(k,v) references.currentset[k].arguments = v end -local expandreferenceoperation = context.expandreferenceoperation -local expandreferencearguments = context.expandreferencearguments - function references.expandcurrent() -- todo: two booleans: o_has_tex& a_has_tex local currentset = references.currentset if currentset and currentset.has_tex then for i=1,#currentset do local ci = currentset[i] local operation = ci.operation - if operation and find(operation,"\\") then -- if o_has_tex then - expandreferenceoperation(i,operation) + if operation and find(operation,"\\",1,true) then -- if o_has_tex then + ctx_expandreferenceoperation(i,operation) end local arguments = ci.arguments - if arguments and find(arguments,"\\") then -- if a_has_tex then - expandreferencearguments(i,arguments) + if arguments and find(arguments,"\\",1,true) then -- if a_has_tex then + ctx_expandreferencearguments(i,arguments) end end end @@ -856,8 +925,8 @@ end local externalfiles = { } -table.setmetatableindex(externalfiles, function(t,k) - local v = files[k] +setmetatableindex(externalfiles, function(t,k) + local v = filedata[k] if not v then v = { k, k } end @@ -865,7 +934,7 @@ table.setmetatableindex(externalfiles, function(t,k) return v end) -table.setmetatableindex(externals,function(t,k) -- either or not automatically +setmetatableindex(externals, function(t,k) -- either or not automatically local filename = externalfiles[k][1] -- filename local fullname = file.replacesuffix(filename,"tuc") if lfs.isfile(fullname) then -- todo: use other locator @@ -952,22 +1021,6 @@ local function loadproductreferences(productname,componentname,utilitydata) ptarget = { } productreferences[prefix] = ptarget end - -- for s in gmatch(reference,"%s*([^,]+)") do - -- if ptarget 
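The change from `find(operation,"\\")` to `find(operation,"\\",1,true)` above switches `string.find` to plain mode: the needle is searched for literally instead of being compiled as a Lua pattern, which is faster and immune to pattern-magic characters. A quick illustration:

```lua
-- In plain mode (fourth argument true) string.find does a literal
-- substring search: no pattern compilation, magic characters inert.
local tex = [[\ref{foo}]]
local has_backslash = string.find(tex, "\\", 1, true) ~= nil

-- with a magic character as needle, the plain flag is what keeps this
-- safe: string.find("50% done", "%") would raise "malformed pattern"
local pct = string.find("50% done", "%", 1, true)
```

For a fixed one-character needle like the backslash the result is the same either way; the plain flag just skips the pattern machinery.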
then - -- if trace_importing then - -- report_importing("registering %s reference, kind %a, name %a, prefix %a, reference %a", - -- "product",kind,productname,prefix,s) - -- end - -- ptarget[s] = ptarget[s] or entry - -- end - -- if ctarget then - -- if trace_importing then - -- report_importing("registering %s reference, kind %a, name %a, prefix %a, referenc %a", - -- "component",kind,productname,prefix,s) - -- end - -- ctarget[s] = ctarget[s] or entry - -- end - -- end local function action(s) if ptarget then if trace_importing then @@ -1062,7 +1115,7 @@ references.registerinitializer(function(tobesaved,collected) productdata.components = componentlist(job.structure.collected) or { } end) -function structures.references.loadpresets(product,component) -- we can consider a special components hash +function references.loadpresets(product,component) -- we can consider a special components hash if product and component and product~= "" and component ~= "" and not productdata.product then -- maybe: productdata.filename ~= filename productdata.product = product productdata.component = component @@ -1082,7 +1135,7 @@ function structures.references.loadpresets(product,component) -- we can consider end end -structures.references.productdata = productdata +references.productdata = productdata local useproduct = commands.useproduct @@ -1096,7 +1149,7 @@ if useproduct then if trace_referencing or trace_importing then report_references("loading presets for component %a of product %a",component,product) end - structures.references.loadpresets(product,component) + references.loadpresets(product,component) end end end @@ -1194,7 +1247,7 @@ local function identify_arguments(set,var,i) local s = specials[var.inner] if s then -- inner{argument} - var.kind = "special with arguments" + var.kind = "special operation with arguments" else var.error = "unknown inner or special" end @@ -1204,114 +1257,105 @@ local function identify_arguments(set,var,i) return var end -local function 
identify_inner(set,var,prefix,collected,derived,tobesaved) +-- needs checking: if we don't do too much (redundant) checking now +-- inner ... we could move the prefix logic into the parser so that we have 'm for each entry +-- foo:bar -> foo == prefix (first we try the global one) +-- -:bar -> ignore prefix + +local function finish_inner(var,p,i) + var.kind = "inner" + var.i = i + var.p = p + var.r = (i.references and i.references.realpage) or (i.pagedata and i.pagedata.realpage) or 1 + return var +end + +local function identify_inner(set,var,prefix,collected,derived) local inner = var.inner - local outer = var.outer - -- inner ... we could move the prefix logic into the parser so that we have 'm for each entry - -- foo:bar -> foo == prefix (first we try the global one) - -- -:bar -> ignore prefix - local p, i = prefix, nil - local splitprefix, splitinner -- the next test is a safeguard when references are auto loaded from outer - if inner then - splitprefix, splitinner = lpegmatch(prefixsplitter,inner) + if not inner or inner == "" then + return false end - -- these are taken from other anonymous references + local splitprefix, splitinner = lpegmatch(prefixsplitter,inner) if splitprefix and splitinner then + -- we check for a prefix:reference instance in the regular set of collected + -- references; a special case is -: which forces a lookup in the global list if splitprefix == "-" then - i = collected[""] - i = i and i[splitinner] - if i then - p = "" - end - else - i = collected[splitprefix] - i = i and i[splitinner] + local i = collected[""] if i then - p = splitprefix + i = i[splitinner] + if i then + return finish_inner(var,"",i) + end end end - end - -- todo: strict here - if not i then - i = collected[prefix] - i = i and i[inner] - if i then - p = prefix - end - end - if not i and prefix ~= "" then - i = collected[""] - i = i and i[inner] + local i = collected[splitprefix] if i then - p = "" + i = i[splitinner] + if i then + return 
finish_inner(var,splitprefix,i) + end end + if derived then + -- next we look for a reference in the regular set of collected references + -- using the prefix that is active at this moment (so we overload the given + -- these are taken from other data structures (like lists) if splitprefix == "-" then - i = derived[""] - i = i and i[splitinner] + local i = derived[""] if i then - p = "" + i = i[splitinner] + if i then + return finish_inner(var,"",i) + end end - else - i = derived[splitprefix] - i = i and i[splitinner] + end + local i = derived[splitprefix] + if i then + i = i[splitinner] if i then - p = splitprefix + return finish_inner(var,splitprefix,i) end end end - if not i then - i = derived[prefix] - i = i and i[inner] - if i then - p = prefix - end + end + -- we now ignore the split prefix and treat the whole inner as a potential + -- reference into the global list + local i = collected[prefix] + if i then + i = i[inner] + if i then + return finish_inner(var,prefix,i) end - if not i and prefix ~= "" then - i = derived[""] - i = i and i[inner] + end + if not i and derived then + -- and if not found we look in the derived references + local i = derived[prefix] + if i then + i = i[inner] if i then - p = "" + return finish_inner(var,prefix,i) end end + end + return false +end + +local function unprefixed_inner(set,var,prefix,collected,derived,tobesaved) + local inner = var.inner + local s = specials[inner] + if s then + var.kind = "special" + else + local i = (collected and collected[""] and collected[""][inner]) or + (derived and derived [""] and derived [""][inner]) or + (tobesaved and tobesaved[""] and tobesaved[""][inner]) if i then var.kind = "inner" - var.i = i - var.p = p - local ri = resolvers[i[1]] - if ri then - ri(var) - else -
-- can't happen as we catch it with a metatable now - report_references("unknown inner resolver for %a",i[1]) - end + var.p = "" + var.i = i + var.r = (i.references and i.references.realpage) or (i.pagedata and i.pagedata.realpage) or 1 else - -- no prefixes here - local s = specials[inner] - if s then - var.kind = "special" - else - i = (collected and collected[""] and collected[""][inner]) or - (derived and derived [""] and derived [""][inner]) or - (tobesaved and tobesaved[""] and tobesaved[""][inner]) - if i then - var.kind = "inner" - var.i = { "reference", i } - resolvers.reference(var) - var.p = "" - else - var.error = "unknown inner or special" - end - end + var.error = "unknown inner or special" end end return var @@ -1322,9 +1366,8 @@ local function identify_outer(set,var,i) local inner = var.inner local external = externals[outer] if external then - local v = copytable(var) - v = identify_inner(set,v,nil,external) - if v.i and not v.error then + local v = identify_inner(set,var,nil,external) + if v then v.kind = "outer with inner" set.external = true if trace_identifying then @@ -1332,9 +1375,8 @@ local function identify_outer(set,var,i) end return v end - v = copytable(var) - local v = identify_inner(set,v,v.outer,external) - if v.i and not v.error then + local v = identify_inner(set,var,var.outer,external) + if v then v.kind = "outer with inner" set.external = true if trace_identifying then @@ -1345,8 +1387,8 @@ local function identify_outer(set,var,i) end local external = productdata.componentreferences[outer] if external then - local v = identify_inner(set,copytable(var),nil,external) - if v.i and not v.error then + local v = identify_inner(set,var,nil,external) + if v then v.kind = "outer with inner" set.external = true if trace_identifying then @@ -1373,6 +1415,8 @@ local function identify_outer(set,var,i) local arguments = var.arguments local operation = var.operation if inner then + -- tricky: in this case we can only use views when we're sure 
that all inners + -- are flushed in the outer document so that should become an option if arguments then -- outer::inner{argument} var.kind = "outer with inner with arguments" @@ -1380,9 +1424,9 @@ local function identify_outer(set,var,i) -- outer::inner var.kind = "outer with inner" end - var.i = { "reference", inner } - resolvers.reference(var) + var.i = inner var.f = outer + var.r = (inner.references and inner.references.realpage) or (inner.pagedata and inner.pagedata.realpage) or 1 if trace_identifying then report_identify_outer(set,var,i,"2e") end @@ -1419,57 +1463,62 @@ local function identify_outer(set,var,i) return var end +-- todo: avoid copy + local function identify_inner_or_outer(set,var,i) -- here we fall back on product data local inner = var.inner if inner and inner ~= "" then - local v = identify_inner(set,copytable(var),set.prefix,collected,derived,tobesaved) - if v.i and not v.error then - v.kind = "inner" -- check this + + -- first we look up in collected and derived using the current prefix + + local prefix = set.prefix + + local v = identify_inner(set,var,set.prefix,collected,derived) + if v then if trace_identifying then report_identify_outer(set,v,i,"4a") end return v end - -- these get auto prefixes but are loaded in the document so they are - -- internal .. we also set the realpage (for samepage analysis) + -- nest we look at each component (but we can omit the already consulted one local components = job.structure.components if components then - for i=1,#components do - local component = components[i] - local data = collected[component] - local vi = data and data[inner] - if vi then --- var = copytable(var) --- var.kind = "inner" --- var.i = vi --- var.p = component --- runners.inner(var.r = vi.references.realpage --- if trace_identifying then --- report_identify_outer(set,var,i,"4x") --- end --- return var -local v = identify_inner(set,copytable(var),component,collected) -- is copy needed ? 
-if v.i and not v.error then - v.kind = "inner" - if trace_identifying then - report_identify_outer(set,var,i,"4x") - end - return v -end + for c=1,#components do + local component = components[c] + if component ~= prefix then + local v = identify_inner(set,var,component,collected,derived) + if v then + if trace_identifying then + report_identify_outer(set,var,i,"4b") + end + return v + end end end end + -- as a last resort we will consult the global lists + + local v = unprefixed_inner(set,var,"",collected,derived,tobesaved) + if v then + if trace_identifying then + report_identify_outer(set,v,i,"4c") + end + return v + end + + -- not it gets bad ... we need to look in external files ... keep in mind that + -- we can best use explicit references for this ... we might issue a warning + local componentreferences = productdata.componentreferences local productreferences = productdata.productreferences local components = productdata.components if components and componentreferences then - -- for component, data in next, productdata.componentreferences do -- better do this in order of processing: - for i=1,#components do - local component = components[i] + for c=1,#components do + local component = components[c] local data = componentreferences[component] if data then local d = data[""] @@ -1480,7 +1529,7 @@ end var.kind = "outer with inner" set.external = true if trace_identifying then - report_identify_outer(set,var,i,"4b") + report_identify_outer(set,var,i,"4d") end return var end @@ -1500,7 +1549,7 @@ end var.kind = "outer with inner" set.external = true if trace_identifying then - report_identify_outer(set,var,i,"4c") + report_identify_outer(set,var,i,"4e") end return var end @@ -1515,7 +1564,7 @@ end var.kind = "outer with inner" set.external = true if trace_identifying then - report_identify_outer(set,var,i,"4d") + report_identify_outer(set,var,i,"4f") end return var end @@ -1526,30 +1575,18 @@ end var.error = "no inner" end if trace_identifying then - 
report_identify_outer(set,var,i,"4e") + report_identify_outer(set,var,i,"4g") end return var end --- local function identify_inner_or_outer(set,var,i) --- -- we might consider first checking with a prefix prepended and then without --- -- which is better for fig:oeps --- local var = do_identify_inner_or_outer(set,var,i) --- if var.error then --- local prefix = set.prefix --- if prefix and prefix ~= "" then --- var.inner = prefix .. ':' .. var.inner --- var.error = nil --- return do_identify_inner_or_outer(set,var,i) --- end --- end --- return var --- end - local function identify_inner_component(set,var,i) -- we're in a product (maybe ignore when same as component) local component = var.component - identify_inner(set,var,component,collected,derived,tobesaved) + local v = identify_inner(set,var,component,collected,derived) + if not v then + var.error = "unknown inner in component" + end if trace_identifying then report_identify_outer(set,var,i,"5a") end @@ -1685,53 +1722,51 @@ end luatex.registerstopactions(references.reportproblems) -local innermethod = "names" +-- The auto method will try to avoid named internals in a clever way which +-- can make files smaller without sacrificing external references. Some of +-- the housekeeping happens on the backend side.
+ +local innermethod = v_auto -- only page|auto now +local defaultinnermethod = defaultinnermethod +references.innermethod = innermethod -- don't mess with this one directly function references.setinnermethod(m) - if m then - if m == "page" or m == "mixed" or m == "names" then - innermethod = m - elseif m == true or m == v_yes then - innermethod = "page" - end + if toboolean(m) or m == v_page then + innermethod = v_page + else + innermethod = v_auto end + references.innermethod = innermethod function references.setinnermethod() report_references("inner method is already set and frozen to %a",innermethod) end end function references.getinnermethod() - return innermethod or "names" + return innermethod or defaultinnermethod end -directives.register("references.linkmethod", function(v) -- page mixed names +directives.register("references.linkmethod", function(v) -- page auto references.setinnermethod(v) end) -- this is inconsistent -function references.setinternalreference(prefix,tag,internal,view) -- needs checking - if innermethod == "page" then - return unsetvalue - else +local destinationattributes = { } + +local function setinternalreference(prefix,tag,internal,view) -- needs checking + local destination = unsetvalue + if innermethod == v_auto then local t, tn = { }, 0 -- maybe add to current if tag then if prefix and prefix ~= "" then prefix = prefix .. ":" -- watch out, : here - -- for ref in gmatch(tag,"[^,]+") do - -- tn = tn + 1 - -- t[tn] = prefix .. ref - -- end local function action(ref) tn = tn + 1 t[tn] = prefix .. ref end process_settings(tag,action) else - -- for ref in gmatch(tag,"[^,]+") do - -- tn = tn + 1 - -- t[tn] = ref - -- end local function action(ref) tn = tn + 1 t[tn] = ref @@ -1739,36 +1774,48 @@ function references.setinternalreference(prefix,tag,internal,view) -- needs chec process_settings(tag,action) end end - if internal and innermethod == "names" then -- mixed or page + -- ugly .. 
later we decide to ignore it when we have a real one + -- but for testing we might want to see them all + if internal then tn = tn + 1 - t[tn] = "aut:" .. internal + t[tn] = internal -- when number it's internal end - local destination = references.mark(t,nil,nil,view) -- returns an attribute - texsetcount("lastdestinationattribute",destination) - return destination + destination = references.mark(t,nil,nil,view) -- returns an attribute + end + if internal then -- new + destinationattributes[internal] = destination end + texsetcount("lastdestinationattribute",destination) + return destination end +local function getinternalreference(internal) + return destinationattributes[internal] or 0 +end + +references.setinternalreference = setinternalreference +references.getinternalreference = getinternalreference +commands.setinternalreference = setinternalreference +commands.getinternalreference = getinternalreference + function references.setandgetattribute(kind,prefix,tag,data,view) -- maybe do internal automatically here - local attr = references.set(kind,prefix,tag,data) and references.setinternalreference(prefix,tag,nil,view) or unsetvalue + local attr = references.set(kind,prefix,tag,data) and setinternalreference(prefix,tag,nil,view) or unsetvalue texsetcount("lastdestinationattribute",attr) return attr end commands.setreferenceattribute = references.setandgetattribute -function references.getinternalreference(n) -- n points into list (todo: registers) +function references.getinternallistreference(n) -- n points into list (todo: registers) local l = lists.collected[n] - return l and l.references.internal or n -end - -function commands.setinternalreference(prefix,tag,internal,view) -- needs checking - context(references.setinternalreference(prefix,tag,internal,view)) + local i = l and l.references.internal + return i and destinationattributes[i] or 0 end -function commands.getinternalreference(n) -- this will also be a texcount +function 
commands.getinternallistreference(n) -- this will also be a texcount local l = lists.collected[n] - context(l and l.references.internal or n) + local i = l and l.references.internal + context(i and destinationattributes[i] or 0) end -- @@ -1800,10 +1847,22 @@ end references.getcurrentprefixspec = getcurrentprefixspec function commands.getcurrentprefixspec(default) - context.getreferencestructureprefix(getcurrentprefixspec(default)) + ctx_getreferencestructureprefix(getcurrentprefixspec(default)) end -function references.filter(name,...) -- number page title ... +local genericfilters = { } +local userfilters = { } +local textfilters = { } +local fullfilters = { } +local sectionfilters = { } + +filters.generic = genericfilters +filters.user = userfilters +filters.text = textfilters +filters.full = fullfilters +filters.section = sectionfilters + +local function filterreference(name,...) -- number page title ... local data = currentreference and currentreference.i -- maybe we should take realpage from here if data then if name == "realpage" then @@ -1812,8 +1871,8 @@ function references.filter(name,...) -- number page title ... else -- assumes data is table local kind = type(data) == "table" and data.metadata and data.metadata.kind if kind then - local filter = filters[kind] or filters.generic - filter = filter and (filter[name] or filter.unknown or filters.generic[name] or filters.generic.unknown) + local filter = filters[kind] or genericfilters + filter = filter and (filter[name] or filter.unknown or genericfilters[name] or genericfilters.unknown) if filter then if trace_referencing then report_references("name %a, kind %a, using dedicated filter",name,kind) @@ -1833,18 +1892,24 @@ function references.filter(name,...) -- number page title ... 
end end -function references.filterdefault() - return references.filter("default",getcurrentprefixspec(v_default)) +local function filterreferencedefault() + return filterreference("default",getcurrentprefixspec(v_default)) end +references.filter = filterreference +references.filterdefault = filterreferencedefault + +commands.filterreference = filterreference +commands.filterdefaultreference = filterreferencedefault + function commands.currentreferencedefault(tag) - if not tag then tag = "default" end - references.filter(tag,context.delayed(getcurrentprefixspec(tag))) + if not tag then + tag = "default" + end + filterreference(tag,context_delayed(getcurrentprefixspec(tag))) end -filters.generic = { } - -function filters.generic.title(data) +function genericfilters.title(data) if data then local titledata = data.titledata or data.useddata if titledata then @@ -1853,7 +1918,7 @@ function filters.generic.title(data) end end -function filters.generic.text(data) +function genericfilters.text(data) if data then local entries = data.entries or data.useddata if entries then @@ -1862,7 +1927,7 @@ function filters.generic.text(data) end end -function filters.generic.number(data,what,prefixspec) -- todo: spec and then no stopper +function genericfilters.number(data,what,prefixspec) -- todo: spec and then no stopper if data then numberdata = lists.reordered(data) -- data.numberdata if numberdata then @@ -1877,16 +1942,16 @@ function filters.generic.number(data,what,prefixspec) -- todo: spec and then no end end -filters.generic.default = filters.generic.text +genericfilters.default = genericfilters.text -function filters.generic.page(data,prefixspec,pagespec) +function genericfilters.page(data,prefixspec,pagespec) local pagedata = data.pagedata if pagedata then local number, conversion = pagedata.number, pagedata.conversion if not number then -- error elseif conversion then - context.convertnumber(conversion,number) + ctx_convertnumber(conversion,number) else context(number) 
end @@ -1895,14 +1960,12 @@ function filters.generic.page(data,prefixspec,pagespec) end end -filters.user = { } - -function filters.user.unknown(data,name) +function userfilters.unknown(data,name) if data then local userdata = data.userdata local userkind = userdata and userdata.kind if userkind then - local filter = filters[userkind] or filters.generic + local filter = filters[userkind] or genericfilters filter = filter and (filter[name] or filter.unknown) if filter then filter(data,name) @@ -1916,9 +1979,7 @@ function filters.user.unknown(data,name) end end -filters.text = { } - -function filters.text.title(data) +function textfilters.title(data) helpers.title(data.entries.text or "?",data.metadata) end @@ -1928,18 +1989,14 @@ end -- helpers.title(data.entries.text or "?",data.metadata) -- end -function filters.text.page(data,prefixspec,pagespec) +function textfilters.page(data,prefixspec,pagespec) helpers.prefixpage(data,prefixspec,pagespec) end -filters.full = { } - -filters.full.title = filters.text.title -filters.full.page = filters.text.page +fullfilters.title = textfilters.title +fullfilters.page = textfilters.page -filters.section = { } - -function filters.section.number(data,what,prefixspec) +function sectionfilters.number(data,what,prefixspec) if data then local numberdata = data.numberdata if not numberdata then @@ -1951,7 +2008,7 @@ function filters.section.number(data,what,prefixspec) local references = data.references if trace_empty then report_empty("reference %a has a hidden number",references.reference) - context.emptyreference() -- maybe an option + ctx_emptyreference() -- maybe an option end else sections.typesetnumber(numberdata,"number",prefixspec,numberdata) @@ -1959,18 +2016,18 @@ function filters.section.number(data,what,prefixspec) end end -filters.section.title = filters.generic.title -filters.section.page = filters.generic.page -filters.section.default = filters.section.number +sectionfilters.title = genericfilters.title 
+sectionfilters.page = genericfilters.page +sectionfilters.default = sectionfilters.number --- filters.note = { default = filters.generic.number } --- filters.formula = { default = filters.generic.number } --- filters.float = { default = filters.generic.number } --- filters.description = { default = filters.generic.number } --- filters.item = { default = filters.generic.number } +-- filters.note = { default = genericfilters.number } +-- filters.formula = { default = genericfilters.number } +-- filters.float = { default = genericfilters.number } +-- filters.description = { default = genericfilters.number } +-- filters.item = { default = genericfilters.number } setmetatableindex(filters, function(t,k) -- beware, test with rawget - local v = { default = filters.generic.number } -- not copy as it might be extended differently + local v = { default = genericfilters.number } -- not copy as it might be extended differently t[k] = v return v end) @@ -2164,7 +2221,7 @@ runners["special operation with arguments"] = runners["special"] -- check the validity. 
function specials.internal(var,actions) - local v = references.internals[tonumber(var.operation)] + local v = internals[tonumber(var.operation)] local r = v and v.references.realpage if r then actions.realpage = r @@ -2226,9 +2283,6 @@ end -- needs a better split ^^^ -commands.filterreference = references.filter -commands.filterdefaultreference = references.filterdefault - -- done differently now: function references.export(usedname) end diff --git a/tex/context/base/strc-ref.mkvi b/tex/context/base/strc-ref.mkvi index 85c6a0729..76d79b802 100644 --- a/tex/context/base/strc-ref.mkvi +++ b/tex/context/base/strc-ref.mkvi @@ -193,12 +193,12 @@ \globallet\currentreferencecoding\s!tex \fi % beware, the structures.references.set writes a - % \setnextinternalreference + \setnextinternalreference \strc_references_start_destination_nodes \ctxcommand{setreferenceattribute("\currentreferencekind", "\referenceprefix","\currentreferencelabels", { references = { - % internal = \nextinternalreference, % no need for an internal as we have an explicit + internal = \nextinternalreference, block = "\currentsectionblock", section = structures.sections.currentid(), }, @@ -243,9 +243,11 @@ \lastdestinationattribute\attributeunsetvalue \else \strc_references_start_destination_nodes - \ctxcommand{setreferenceattribute("\s!page", "\referenceprefix","\currentreferencelabels", +\setnextinternalreference + \ctxcommand{setreferenceattribute("\s!page", "\referenceprefix","\currentreferencelabels", { references = { + internal = \nextinternalreference, block = "\currentsectionblock", section = structures.sections.currentid(), }, @@ -264,9 +266,11 @@ \unexpanded\def\strc_references_direct_full#labels#text% {\ifreferencing \strc_references_start_destination_nodes - \ctxcommand{setreferenceattribute("\s!full", "\referenceprefix","#labels", +\setnextinternalreference + \ctxcommand{setreferenceattribute("\s!full", "\referenceprefix","#labels", { references = { + internal = \nextinternalreference, 
block = "\currentsectionblock", section = structures.sections.currentid(), }, @@ -947,11 +951,12 @@ \begingroup \let\crlf\space \let\\\space - \postponenotes + \postponenotes % might go \referencingparameter\c!left \doifreferencefoundelse{#label} {\goto{\limitatetext\currentreferencetitle{\referencingparameter\c!width}\unknown}[#label]}% not so efficient (dup lookup) {}% todo + \flushnotes % might go \referencingparameter\c!right \endgroup} diff --git a/tex/context/base/strc-reg.lua b/tex/context/base/strc-reg.lua index b0d8a8a25..bdb2e0d67 100644 --- a/tex/context/base/strc-reg.lua +++ b/tex/context/base/strc-reg.lua @@ -13,50 +13,95 @@ local utfchar = utf.char local lpegmatch = lpeg.match local allocate = utilities.storage.allocate -local trace_registers = false trackers.register("structures.registers", function(v) trace_registers = v end) +local trace_registers = false trackers.register("structures.registers", function(v) trace_registers = v end) -local report_registers = logs.reporter("structure","registers") +local report_registers = logs.reporter("structure","registers") -local structures = structures -local registers = structures.registers -local helpers = structures.helpers -local sections = structures.sections -local documents = structures.documents -local pages = structures.pages -local references = structures.references +local structures = structures +local registers = structures.registers +local helpers = structures.helpers +local sections = structures.sections +local documents = structures.documents +local pages = structures.pages +local references = structures.references -local mappings = sorters.mappings -local entries = sorters.entries -local replacements = sorters.replacements +local usedinternals = references.usedinternals -local processors = typesetters.processors -local splitprocessor = processors.split +local mappings = sorters.mappings +local entries = sorters.entries +local replacements = sorters.replacements -local texgetcount = tex.getcount 
+local processors = typesetters.processors +local splitprocessor = processors.split -local variables = interfaces.variables -local context = context -local commands = commands +local texgetcount = tex.getcount -local matchingtilldepth = sections.matchingtilldepth -local numberatdepth = sections.numberatdepth +local variables = interfaces.variables +local v_forward = variables.forward +local v_all = variables.all +local v_yes = variables.yes +local v_current = variables.current +local v_previous = variables.previous +local v_text = variables.text -local absmaxlevel = 5 -- \c_strc_registers_maxlevel +local context = context +local commands = commands + +local matchingtilldepth = sections.matchingtilldepth +local numberatdepth = sections.numberatdepth +local currentlevel = sections.currentlevel +local currentid = sections.currentid + +local touserdata = helpers.touserdata + +local internalreferences = references.internals +local setinternalreference = references.setinternalreference + +local setmetatableindex = table.setmetatableindex +local texsetattribute = tex.setattribute + +local a_destination = attributes.private('destination') + +local absmaxlevel = 5 -- \c_strc_registers_maxlevel + +local ctx_startregisteroutput = context.startregisteroutput +local ctx_stopregisteroutput = context.stopregisteroutput +local ctx_startregistersection = context.startregistersection +local ctx_stopregistersection = context.stopregistersection +local ctx_startregisterentries = context.startregisterentries +local ctx_stopregisterentries = context.stopregisterentries +local ctx_startregisterentry = context.startregisterentry +local ctx_stopregisterentry = context.stopregisterentry +local ctx_startregisterpages = context.startregisterpages +local ctx_stopregisterpages = context.stopregisterpages +local ctx_stopregisterseewords = context.stopregisterseewords +local ctx_startregisterseewords = context.startregisterseewords +local ctx_registerentry = context.registerentry +local 
ctx_registerseeword = context.registerseeword +local ctx_registerpagerange = context.registerpagerange +local ctx_registeronepage = context.registeronepage -- some day we will share registers and lists (although there are some conceptual -- differences in the application of keywords) local function filtercollected(names,criterium,number,collected,prevmode) - if not criterium or criterium == "" then criterium = variables.all end - local data = documents.data - local numbers, depth = data.numbers, data.depth - local hash, result, nofresult, all, detail = { }, { }, 0, not names or names == "" or names == variables.all, nil + if not criterium or criterium == "" then + criterium = v_all + end + local data = documents.data + local numbers = data.numbers + local depth = data.depth + local hash = { } + local result = { } + local nofresult = 0 + local all = not names or names == "" or names == v_all + local detail = nil if not all then for s in gmatch(names,"[^, ]+") do hash[s] = true end end - if criterium == variables.all or criterium == variables.text then + if criterium == v_all or criterium == v_text then for i=1,#collected do local v = collected[i] if all then @@ -70,10 +115,11 @@ local function filtercollected(names,criterium,number,collected,prevmode) end end end - elseif criterium == variables.current then + elseif criterium == v_current then + local collectedsections = sections.collected for i=1,#collected do local v = collected[i] - local sectionnumber = sections.collected[v.references.section] + local sectionnumber = collectedsections[v.references.section] if sectionnumber then local cnumbers = sectionnumber.numbers if prevmode then @@ -108,10 +154,11 @@ local function filtercollected(names,criterium,number,collected,prevmode) end end end - elseif criterium == variables.previous then + elseif criterium == v_previous then + local collectedsections = sections.collected for i=1,#collected do local v = collected[i] - local sectionnumber = 
sections.collected[v.references.section] + local sectionnumber = collectedsections[v.references.section] if sectionnumber then local cnumbers = sectionnumber.numbers if (all or hash[v.metadata.name]) and #cnumbers >= depth then @@ -141,9 +188,9 @@ local function filtercollected(names,criterium,number,collected,prevmode) end elseif criterium == variables["local"] then if sections.autodepth(data.numbers) == 0 then - return filtercollected(names,variables.all,number,collected,prevmode) + return filtercollected(names,v_all,number,collected,prevmode) else - return filtercollected(names,variables.current,number,collected,prevmode) + return filtercollected(names,v_current,number,collected,prevmode) end else -- sectionname, number -- beware, this works ok for registers @@ -193,44 +240,77 @@ registers.filtercollected = filtercollected -- result table; we might do that here as well but since sorting code is -- older we delay that decision +-- maybe store the specification in the format (although we predefine only +-- saved registers) + +local function checker(t,k) + local v = { + metadata = { + language = 'en', + sorted = false, + class = class, + }, + entries = { }, + } + t[k] = v + return v +end + local function initializer() tobesaved = registers.tobesaved collected = registers.collected - local internals = references.internals + setmetatableindex(tobesaved,checker) + setmetatableindex(collected,checker) + local usedinternals = references.usedinternals for name, list in next, collected do local entries = list.entries - for e=1,#entries do - local entry = entries[e] - local r = entry.references - if r then - local internal = r and r.internal - if internal then - internals[internal] = entry + if not list.metadata.notsaved then + for e=1,#entries do + local entry = entries[e] + local r = entry.references + if r then + local internal = r and r.internal + if internal then + internalreferences[internal] = entry + usedinternals[internal] = r.used + end + end + end + end + end 
+end + +local function finalizer() + local flaginternals = references.flaginternals + for k, v in next, tobesaved do + local entries = v.entries + if entries then + for i=1,#entries do + local r = entries[i].references + if r and flaginternals[r.internal] then + r.used = true end end end end end -job.register('structures.registers.collected', tobesaved, initializer) +job.register('structures.registers.collected', tobesaved, initializer, finalizer) + +setmetatableindex(tobesaved,checker) +setmetatableindex(collected,checker) -local function allocate(class) +local function defineregister(class,method) local d = tobesaved[class] - if not d then - d = { - metadata = { - language = 'en', - sorted = false, - class = class - }, - entries = { }, - } - tobesaved[class] = d - end - return d + if method == v_forward then + d.metadata.notsaved = true + end end -registers.define = allocate +registers.define = defineregister -- 4 times is somewhat over the top but we want consistency +registers.setmethod = defineregister -- and we might have a difference some day +commands.defineregister = defineregister +commands.setregistermethod = defineregister local entrysplitter = lpeg.tsplitat('+') -- & obsolete in mkiv @@ -239,7 +319,6 @@ local tagged = { } local function preprocessentries(rawdata) local entries = rawdata.entries if entries then ---~ table.print(rawdata) local e, k = entries[1] or "", entries[2] or "" local et, kt, entryproc, pageproc if type(e) == "table" then @@ -255,14 +334,15 @@ local function preprocessentries(rawdata) kt = lpegmatch(entrysplitter,k) end entries = { } - for k=1,#et do - entries[k] = { et[k] or "", kt[k] or "" } - end + local ok = false for k=#et,1,-1 do - if entries[k][1] ~= "" then - break - else + local etk = et[k] + local ktk = kt[k] + if not ok and etk == "" then entries[k] = nil + else + entries[k] = { etk or "", ktk ~= "" and ktk or nil } + ok = true end end rawdata.list = entries @@ -277,44 +357,94 @@ local function preprocessentries(rawdata) 
end end -function registers.store(rawdata) -- metadata, references, entries - local data = allocate(rawdata.metadata.name).entries +local function storeregister(rawdata) -- metadata, references, entries local references = rawdata.references - references.realpage = references.realpage or 0 -- just to be sure as it can be refered to + local metadata = rawdata.metadata + -- checking + if not metadata.kind then + metadata.kind = "entry" + end + -- + if not metadata.catcodes then + metadata.catcodes = tex.catcodetable -- get + end + -- + local name = metadata.name + local notsaved = tobesaved[name].metadata.notsaved + -- + local internal = references.internal + if not internal then + internal = texgetcount("locationcount") -- we assume that it has been set + references.internal = internal + end + -- + if notsaved then + usedinternals[internal] = true -- todo view (we assume that forward references index entries are used) + end + -- + if not references.realpage then + references.realpage = 0 -- just to be sure as it can be refered to + end + -- + local userdata = rawdata.userdata + if userdata then + rawdata.userdata = touserdata(userdata) + end + -- + references.section = currentid() + metadata.level = currentlevel() + -- + local data = notsaved and collected[name] or tobesaved[name] + local entries = data.entries + internalreferences[internal] = rawdata preprocessentries(rawdata) - data[#data+1] = rawdata + entries[#entries+1] = rawdata local label = references.label - if label and label ~= "" then tagged[label] = #data end - context(#data) + if label and label ~= "" then + tagged[label] = #entries + else + references.label = nil + end + return #entries end -function registers.enhance(name,n) - local r = tobesaved[name].entries[n] - if r then - r.references.realpage = texgetcount("realpageno") +local function enhanceregister(name,n) + local data = tobesaved[name].metadata.notsaved and collected[name] or tobesaved[name] + local entry = data.entries[n] + if entry then + 
entry.references.realpage = texgetcount("realpageno") end end -function registers.extend(name,tag,rawdata) -- maybe do lastsection internally +local function extendregister(name,tag,rawdata) -- maybe do lastsection internally if type(tag) == "string" then tag = tagged[tag] end if tag then - local r = tobesaved[name].entries[tag] - if r then - local rr = r.references - rr.lastrealpage = texgetcount("realpageno") - rr.lastsection = sections.currentid() + local data = tobesaved[name].metadata.notsaved and collected[name] or tobesaved[name] + local entry = data.entries[tag] + if entry then + local references = entry.references + references.lastrealpage = texgetcount("realpageno") + references.lastsection = currentid() if rawdata then + local userdata = rawdata.userdata + if userdata then + rawdata.userdata = touserdata(userdata) + end if rawdata.entries then preprocessentries(rawdata) end - for k,v in next, rawdata do - if not r[k] then - r[k] = v + local metadata = rawdata.metadata + if metadata and not metadata.catcodes then + metadata.catcodes = tex.catcodetable -- get + end + for k, v in next, rawdata do + local rk = references[k] + if not rk then + references[k] = v else - local rk = r[k] - for kk,vv in next, v do + for kk, vv in next, v do if type(vv) == "table" then if next(vv) then rk[kk] = vv @@ -330,6 +460,19 @@ function registers.extend(name,tag,rawdata) -- maybe do lastsection internally end end +registers.store = storeregister +registers.enhance = enhanceregister +registers.extend = extendregister + +function commands.storeregister(rawdata) + local nofentries = storeregister(rawdata) + setinternalreference(nil,nil,rawdata.references.internal) + context(nofentries) +end + +commands.enhanceregister = enhanceregister +commands.extendregister = extendregister + -- sorting and rendering local compare = sorters.comparers.basic @@ -339,7 +482,8 @@ function registers.compare(a,b) if result ~= 0 then return result else - local ka, kb = a.metadata.kind, 
b.metadata.kind + local ka = a.metadata.kind + local kb = b.metadata.kind if ka == kb then local page_a, page_b = a.references.realpage, b.references.realpage if not page_a or not page_b then @@ -453,17 +597,19 @@ end function registers.prepare(data) -- data has 'list' table - local strip = sorters.strip + local strip = sorters.strip local splitter = sorters.splitters.utf - local result = data.result + local result = data.result if result then for i=1, #result do - local entry, split = result[i], { } - local list = entry.list + local entry = result[i] + local split = { } + local list = entry.list if list then for l=1,#list do - local ll = list[l] - local word, key = ll[1], ll[2] + local ll = list[l] + local word = ll[1] + local key = ll[2] if not key or key == "" then key = word end @@ -478,7 +624,11 @@ function registers.prepare(data) end function registers.sort(data,options) - sorters.sort(data.result,registers.compare) + -- if options.pagenumber == false then + -- sorters.sort(data.result,compare) + -- else + sorters.sort(data.result,registers.compare) + -- end end function registers.unique(data,options) @@ -487,7 +637,8 @@ function registers.unique(data,options) for k=1,#dataresult do local v = dataresult[k] if prev then - local pr, vr = prev.references, v.references + local vr = v.references + local pr = prev.references if not equal(prev.list,v.list) then -- ok elseif pr.realpage ~= vr.realpage then @@ -530,10 +681,11 @@ function registers.finalize(data,options) -- maps character to index (order) if trace_registers then report_registers("splitting at %a",tag) end - done, nofdone = { }, 0 + done = { } + nofdone = 0 nofsplit = nofsplit + 1 + lasttag = tag split[nofsplit] = { tag = tag, data = done } - lasttag = tag end nofdone = nofdone + 1 done[nofdone] = v @@ -541,7 +693,7 @@ function registers.finalize(data,options) -- maps character to index (order) data.result = split end -function registers.analyzed(class,options) +local function 
analyzeregister(class,options) local data = collected[class] if data and data.entries then options = options or { } @@ -558,10 +710,22 @@ function registers.analyzed(class,options) end end +registers.analyze = analyzeregister + +function registers.analyze(class,options) + context(analyzeregister(class,options)) +end + + -- todo take conversion from index function registers.userdata(index,name) local data = references.internals[tonumber(index)] + return data and data.userdata and data.userdata[name] or nil +end + +function commands.registeruserdata(index,name) + local data = references.internals[tonumber(index)] data = data and data.userdata and data.userdata[name] if data then context(data) @@ -570,22 +734,26 @@ end -- todo: ownnumber +local h_prefixpage = helpers.prefixpage +local h_prefixlastpage = helpers.prefixlastpage +local h_title = helpers.title + local function pagerange(f_entry,t_entry,is_last,prefixspec,pagespec) local fer, ter = f_entry.references, t_entry.references - context.registerpagerange( + ctx_registerpagerange( f_entry.processors and f_entry.processors[2] or "", fer.internal or 0, fer.realpage or 0, function() - helpers.prefixpage(f_entry,prefixspec,pagespec) + h_prefixpage(f_entry,prefixspec,pagespec) end, ter.internal or 0, ter.lastrealpage or ter.realpage or 0, function() if is_last then - helpers.prefixlastpage(t_entry,prefixspec,pagespec) -- swaps page and realpage keys + h_prefixlastpage(t_entry,prefixspec,pagespec) -- swaps page and realpage keys else - helpers.prefixpage (t_entry,prefixspec,pagespec) + h_prefixpage (t_entry,prefixspec,pagespec) end end ) @@ -593,11 +761,11 @@ end local function pagenumber(entry,prefixspec,pagespec) local er = entry.references - context.registeronepage( + ctx_registeronepage( entry.processors and entry.processors[2] or "", er.internal or 0, er.realpage or 0, - function() helpers.prefixpage(entry,prefixspec,pagespec) end + function() h_prefixpage(entry,prefixspec,pagespec) end ) end @@ -665,8 +833,9 @@ 
local function collapsepages(pages) end function registers.flush(data,options,prefixspec,pagespec) - local collapse_singles = options.compress == variables.yes - local collapse_ranges = options.compress == variables.all + local collapse_singles = options.compress == v_yes + local collapse_ranges = options.compress == v_all + local show_page_number = options.pagenumber ~= false -- true or false local result = data.result local maxlevel = 0 -- @@ -684,18 +853,19 @@ function registers.flush(data,options,prefixspec,pagespec) report_registers("limiting level to %a",maxlevel) end -- - context.startregisteroutput() -local done = { } + ctx_startregisteroutput() + local done = { } + local started = false for i=1,#result do -- ranges need checking ! local sublist = result[i] -- local done = { false, false, false, false } -for i=1,maxlevel do - done[i] = false -end + for i=1,maxlevel do + done[i] = false + end local data = sublist.data local d, n = 0, 0 - context.startregistersection(sublist.tag) + ctx_startregistersection(sublist.tag) for d=1,#data do local entry = data[d] if entry.metadata.kind == "see" then @@ -714,9 +884,9 @@ end d = d + 1 local entry = data[d] local e = { false, false, false } -for i=3,maxlevel do - e[i] = false -end + for i=3,maxlevel do + e[i] = false + end local metadata = entry.metadata local kind = metadata.kind local list = entry.list @@ -727,125 +897,135 @@ end if e[i] ~= done[i] then if e[i] and e[i] ~= "" then done[i] = e[i] -for j=i+1,maxlevel do - done[j] = false -end + for j=i+1,maxlevel do + done[j] = false + end + if started then + ctx_stopregisterentry() + started = false + end if n == i then - context.stopregisterentries() - context.startregisterentries(n) +-- ctx_stopregisterentries() +-- ctx_startregisterentries(n) else while n > i do n = n - 1 - context.stopregisterentries() + ctx_stopregisterentries() end while n < i do n = n + 1 - context.startregisterentries(n) + ctx_startregisterentries(n) end end - local internal = 
entry.references.internal or 0 - local seeparent = entry.references.seeparent or "" - local processor = entry.processors and entry.processors[1] or "" + local references = entry.references + local processors = entry.processors + local internal = references.internal or 0 + local seeparent = references.seeparent or "" + local processor = processors and processors[1] or "" -- so, we need to keep e as is (local), or we need local title = e[i] ... which might be -- more of a problem + ctx_startregisterentry(0) -- will become a counter + started = true if metadata then - context.registerentry(processor,internal,seeparent,function() helpers.title(e[i],metadata) end) + ctx_registerentry(processor,internal,seeparent,function() h_title(e[i],metadata) end) else -- ? - context.registerentry(processor,internal,seeindex,e[i]) + ctx_registerentry(processor,internal,seeindex,e[i]) end else done[i] = false -for j=i+1,maxlevel do - done[j] = false -end + for j=i+1,maxlevel do + done[j] = false + end end end end if kind == 'entry' then - context.startregisterpages() - if collapse_singles or collapse_ranges then - -- we collapse ranges and keep existing ranges as they are - -- so we get prebuilt as well as built ranges - local first, last, prev, pages, dd, nofpages = entry, nil, entry, { }, d, 0 - while dd < #data do - dd = dd + 1 - local next = data[dd] - if next and next.metadata.kind == "see" then - dd = dd - 1 - break - else - local el, nl = entry.list, next.list - if not equal(el,nl) then + if show_page_number then + ctx_startregisterpages() + if collapse_singles or collapse_ranges then + -- we collapse ranges and keep existing ranges as they are + -- so we get prebuilt as well as built ranges + local first, last, prev, pages, dd, nofpages = entry, nil, entry, { }, d, 0 + while dd < #data do + dd = dd + 1 + local next = data[dd] + if next and next.metadata.kind == "see" then dd = dd - 1 - --~ first = nil break - elseif next.references.lastrealpage then - nofpages = nofpages + 1 - 
pages[nofpages] = first and { first, last or first } or { entry, entry } - nofpages = nofpages + 1 - pages[nofpages] = { next, next } - first, last, prev = nil, nil, nil - elseif not first then - first, prev = next, next - elseif next.references.realpage - prev.references.realpage == 1 then -- 1 ? - last, prev = next, next else - nofpages = nofpages + 1 - pages[nofpages] = { first, last or first } - first, last, prev = next, nil, next + local el, nl = entry.list, next.list + if not equal(el,nl) then + dd = dd - 1 + --~ first = nil + break + elseif next.references.lastrealpage then + nofpages = nofpages + 1 + pages[nofpages] = first and { first, last or first } or { entry, entry } + nofpages = nofpages + 1 + pages[nofpages] = { next, next } + first, last, prev = nil, nil, nil + elseif not first then + first, prev = next, next + elseif next.references.realpage - prev.references.realpage == 1 then -- 1 ? + last, prev = next, next + else + nofpages = nofpages + 1 + pages[nofpages] = { first, last or first } + first, last, prev = next, nil, next + end end end - end - if first then - nofpages = nofpages + 1 - pages[nofpages] = { first, last or first } - end - if collapse_ranges and nofpages > 1 then - nofpages = collapsepages(pages) - end - if nofpages > 0 then -- or 0 - d = dd - for p=1,nofpages do - local first, last = pages[p][1], pages[p][2] - if first == last then - if first.references.lastrealpage then - pagerange(first,first,true,prefixspec,pagespec) + if first then + nofpages = nofpages + 1 + pages[nofpages] = { first, last or first } + end + if collapse_ranges and nofpages > 1 then + nofpages = collapsepages(pages) + end + if nofpages > 0 then -- or 0 + d = dd + for p=1,nofpages do + local first, last = pages[p][1], pages[p][2] + if first == last then + if first.references.lastrealpage then + pagerange(first,first,true,prefixspec,pagespec) + else + pagenumber(first,prefixspec,pagespec) + end + elseif last.references.lastrealpage then + 
pagerange(first,last,true,prefixspec,pagespec) else - pagenumber(first,prefixspec,pagespec) + pagerange(first,last,false,prefixspec,pagespec) end - elseif last.references.lastrealpage then - pagerange(first,last,true,prefixspec,pagespec) - else - pagerange(first,last,false,prefixspec,pagespec) end - end - elseif entry.references.lastrealpage then - pagerange(entry,entry,true,prefixspec,pagespec) - else - pagenumber(entry,prefixspec,pagespec) - end - else - while true do - if entry.references.lastrealpage then + elseif entry.references.lastrealpage then pagerange(entry,entry,true,prefixspec,pagespec) else pagenumber(entry,prefixspec,pagespec) end - if d == #data then - break - else - d = d + 1 - local next = data[d] - if next.metadata.kind == "see" or not equal(entry.list,next.list) then - d = d - 1 + else + while true do + if entry.references.lastrealpage then + pagerange(entry,entry,true,prefixspec,pagespec) + else + pagenumber(entry,prefixspec,pagespec) + end + if d == #data then break else - entry = next + d = d + 1 + local next = data[d] + if next.metadata.kind == "see" or not equal(entry.list,next.list) then + d = d - 1 + break + else + entry = next + end end end end + ctx_stopregisterpages() end - context.stopregisterpages() elseif kind == 'see' then local t, nt = { }, 0 while true do @@ -864,38 +1044,46 @@ end end end end - context.startregisterseewords() + ctx_startregisterseewords() for i=1,nt do local entry = t[i] local seeword = entry.seeword local seetext = seeword.text or "" local processor = seeword.processor or (entry.processors and entry.processors[1]) or "" local seeindex = entry.references.seeindex or "" - context.registerseeword(i,n,processor,0,seeindex,seetext) + ctx_registerseeword(i,n,processor,0,seeindex,seetext) end - context.stopregisterseewords() + ctx_stopregisterseewords() end end + if started then + ctx_stopregisterentry() + started = false + end while n > 0 do - context.stopregisterentries() + ctx_stopregisterentries() n = n - 1 end - 
context.stopregistersection() + ctx_stopregistersection() end - context.stopregisteroutput() + ctx_stopregisteroutput() -- for now, maybe at some point we will do a multipass or so data.result = nil data.metadata.sorted = false + -- temp hack for luajittex : + local entries = data.entries + for i=1,#entries do + entries[i].split = nil + end + -- collectgarbage("collect") end - -function registers.analyze(class,options) - context(registers.analyzed(class,options)) -end - -function registers.process(class,...) - if registers.analyzed(class,...) > 0 then - registers.flush(collected[class],...) +local function processregister(class,...) + if analyzeregister(class,...) > 0 then + local data = collected[class] + registers.flush(data,...) end end +registers.process = processregister +commands.processregister = processregister diff --git a/tex/context/base/strc-reg.mkiv b/tex/context/base/strc-reg.mkiv index 2d28114c3..d072aca69 100644 --- a/tex/context/base/strc-reg.mkiv +++ b/tex/context/base/strc-reg.mkiv @@ -17,6 +17,8 @@ \unprotect +\startcontextdefinitioncode + % todo: tag:: becomes rendering % todo: language, character, linked, location % todo: fonts etc at sublevels (already defined) @@ -106,6 +108,14 @@ \c!entries=, \c!alternative=] + +\definemixedcolumns + [\v!register] + [\c!n=\registerparameter\c!n, + \c!balance=\registerparameter\c!balance, + \c!align=\registerparameter\c!align, + \c!tolerance=\registerparameter\c!tolerance] + %D \starttyping %D \setupregister[index][1][textcolor=darkred] %D \setupregister[index][2][textcolor=darkgreen,textstyle=bold] @@ -123,7 +133,8 @@ \appendtoks \ifconditional\c_strc_registers_defining \else % todo: dosingle ... 
\settrue\c_strc_registers_defining - \ctxlua{structures.registers.define('\currentregister')}% + \definemixedcolumns[\currentregister][\v!register]% first as otherwise it overloads start/stop + \ctxcommand{defineregister("\currentregister","\registerparameter\c!referencemethod")}% \normalexpanded{\presetheadtext[\currentregister=\Word{\currentregister}]}% \setuevalue{\currentregister}{\dodoubleempty\strc_registers_insert_entry[\currentregister]}% \setuevalue{\e!see\currentregister}{\dodoubleempty\strc_registers_insert_see[\currentregister]}% @@ -143,6 +154,10 @@ \fi \to \everydefineregister +\appendtoks + \ctxcommand{setregistermethod("\currentregister","\registerparameter\c!referencemethod")}% +\to \everysetupregister + %D Registering: \def\strc_registers_register_page_entry @@ -152,6 +167,52 @@ \expandafter\strc_registers_register_page_entry_indeed \fi} +\def\strc_registers_register_page_expand_xml_entries + {\xmlstartraw + \xdef\currentregisterentriesa{\registerparameter{\c!entries:1}}% + \xdef\currentregisterentriesb{\registerparameter{\c!entries:2}}% + \xdef\currentregisterentriesc{\registerparameter{\c!entries:3}}% + \xmlstopraw + \globallet\currentregistercoding\s!xml} + +\def\strc_registers_register_page_expand_yes_entries + {\xdef\currentregisterentriesa{\registerparameter{\c!entries:1}}% + \xdef\currentregisterentriesb{\registerparameter{\c!entries:2}}% + \xdef\currentregisterentriesc{\registerparameter{\c!entries:3}}% + \globallet\currentregistercoding\s!tex} + +\def\strc_registers_register_page_expand_nop_entries + {\xdef\currentregisterentriesa{\detokenizedregisterparameter{\c!entries:1}}% + \xdef\currentregisterentriesb{\detokenizedregisterparameter{\c!entries:2}}% + \xdef\currentregisterentriesc{\detokenizedregisterparameter{\c!entries:3}}% + \globallet\currentregistercoding\s!tex} + +\def\strc_registers_register_page_expand_xml + {\xmlstartraw + \xdef\currentregisterentries{\registerparameter\c!entries}% + \xmlstopraw + 
\globallet\currentregistercoding\s!xml} + +\def\strc_registers_register_page_expand_yes + {\xdef\currentregisterentries{\registerparameter\c!entries}% + \globallet\currentregistercoding\s!tex} + +\def\strc_registers_register_page_expand_nop + {\xdef\currentregisterentries{\detokenizedregisterparameter\c!entries}% + \globallet\currentregistercoding\s!tex} + +\def\strc_registers_register_page_expand_xml_keys + {\xmlstartraw + \xdef\currentregisterkeysa{\registerparameter{\c!keys:1}}% + \xdef\currentregisterkeysb{\registerparameter{\c!keys:2}}% + \xdef\currentregisterkeysc{\registerparameter{\c!keys:3}}% + \xmlstopraw} + +\def\strc_registers_register_page_expand_yes_keys + {\xdef\currentregisterkeysa{\registerparameter{\c!keys:1}}% + \xdef\currentregisterkeysb{\registerparameter{\c!keys:2}}% + \xdef\currentregisterkeysc{\registerparameter{\c!keys:3}}} + \def\strc_registers_register_page_entry_indeed#1#2#3% register data userdata {\begingroup \edef\currentregister{#1}% @@ -165,75 +226,54 @@ \xdef\currentregisterxmlsetup {\registerparameter\c!xmlsetup}% \ifx\currentregisterentries\empty \ifx\currentregisterexpansion\s!xml - \xmlstartraw - \xdef\currentregisterentriesa{\registerparameter{\c!entries:1}}% - \xdef\currentregisterentriesb{\registerparameter{\c!entries:2}}% - \xdef\currentregisterentriesc{\registerparameter{\c!entries:3}}% - \xmlstopraw - \globallet\currentregistercoding\s!xml + \strc_registers_register_page_expand_xml_entries + \else\ifx\currentregisterexpansion\v!yes + \strc_registers_register_page_expand_yes_entries \else - \ifx\currentregisterexpansion\v!yes - \xdef\currentregisterentriesa{\registerparameter{\c!entries:1}}% - \xdef\currentregisterentriesb{\registerparameter{\c!entries:2}}% - \xdef\currentregisterentriesc{\registerparameter{\c!entries:3}}% - \else - \xdef\currentregisterentriesa{\detokenizedregisterparameter{\c!entries:1}}% - \xdef\currentregisterentriesb{\detokenizedregisterparameter{\c!entries:2}}% - 
\xdef\currentregisterentriesc{\detokenizedregisterparameter{\c!entries:3}}% - \fi - \globallet\currentregistercoding\s!tex - \fi + \strc_registers_register_page_expand_nop_entries + \fi\fi \else \ifx\currentregisterexpansion\s!xml - \xmlstartraw - \xdef\currentregisterentries{\registerparameter\c!entries}% - \xmlstopraw - \globallet\currentregistercoding\s!xml + \strc_registers_register_page_expand_xml + \else\ifx\currentregisterexpansion\v!yes + \strc_registers_register_page_expand_yes \else - \ifx\currentregisterexpansion\v!yes - \xdef\currentregisterentries{\registerparameter\c!entries}% - \else - \xdef\currentregisterentries{\detokenizedregisterparameter\c!entries}% - \fi - \globallet\currentregistercoding\s!tex - \fi + \strc_registers_register_page_expand_nop + \fi\fi \fi \ifx\currentregisterkeys\empty \ifx\currentregistercoding\s!xml - \xmlstartraw - \xdef\currentregisterkeysa{\registerparameter{\c!keys:1}}% - \xdef\currentregisterkeysb{\registerparameter{\c!keys:2}}% - \xdef\currentregisterkeysc{\registerparameter{\c!keys:3}}% - \xmlstopraw + \strc_registers_register_page_expand_xml_keys \else - \xdef\currentregisterkeysa{\registerparameter{\c!keys:1}}% - \xdef\currentregisterkeysb{\registerparameter{\c!keys:2}}% - \xdef\currentregisterkeysc{\registerparameter{\c!keys:3}}% + \strc_registers_register_page_expand_yes_keys \fi \fi \setnextinternalreference % we could consider storing register entries in a list which we % could then sort - \xdef\currentregisternumber{\ctxlua{ - structures.registers.store { % 'own' should not be in metadata + \xdef\currentregisternumber{\ctxcommand{storeregister{ % 'own' should not be in metadata metadata = { - kind = "entry", + % kind = "entry", name = "\currentregister", - level = structures.sections.currentlevel(), + % level = structures.sections.currentlevel(), coding = "\currentregistercoding", - catcodes = \the\catcodetable, + % catcodes = \the\catcodetable, \ifx\currentregisterownnumber\v!yes own = 
"\registerparameter\c!alternative", % can be used instead of pagenumber \fi - xmlroot = \ifx\currentreferencecoding\s!xml "\xmldocument" \else nil \fi, % only useful when text + \ifx\currentreferencecoding\s!xml + xmlroot = "\xmldocument", % only useful when text + \fi \ifx\currentregisterxmlsetup\empty \else xmlsetup = "\currentregisterxmlsetup", \fi }, references = { - internal = \nextinternalreference, - section = structures.sections.currentid(), % hm, why then not also lastsection the same way + % internal = \nextinternalreference, + % section = structures.sections.currentid(), % hm, why then not also lastsection the same way + \ifx\currentregisterlabel\empty \else label = "\currentregisterlabel", + \fi }, % \ifx\currentregisterentries\empty \else entries = { @@ -253,11 +293,11 @@ userdata = structures.helpers.touserdata(\!!bs\detokenize{#3}\!!es) } }}% - \ctxlua{structures.references.setinternalreference(nil,nil,\nextinternalreference)}% + % \ctxcommand{setinternalreference(nil,nil,\nextinternalreference)}% in previous \ifx\currentregisterownnumber\v!yes \glet\currentregistersynchronize\relax \else - \xdef\currentregistersynchronize{\ctxlatelua{structures.registers.enhance("\currentregister",\currentregisternumber)}}% + \xdef\currentregistersynchronize{\ctxlatecommand{enhanceregister("\currentregister",\currentregisternumber)}}% \fi \currentregistersynchronize % here? % needs thinking ... bla\index{bla}. will break before the . but adding a @@ -296,7 +336,7 @@ \fi} \def\strc_registers_stop_entry[#1][#2]% - {\normalexpanded{\ctxlatelua{structures.registers.extend("#1","#2")}}} + {\normalexpanded{\ctxlatecommand{extendregister("#1","#2")}}} \def\setregisterentry {\dotripleempty\strc_registers_set_entry} \def\finishregisterentry{\dotripleempty\strc_registers_finish_entry} @@ -329,19 +369,19 @@ \fi % I hate this kind of mess ... but it's a user request. 
\ifx\currentregisterentries\empty - \normalexpanded{\ctxlua{structures.registers.extend("\currentregister","\currentregisterlabel", { + \normalexpanded{\ctxcommand{extendregister("\currentregister","\currentregisterlabel", { metadata = { \ifx\currentregisterownnumber\v!yes own = "\registerparameter\c!alternative", % can be used instead of pagenumber \fi }, - userdata = structures.helpers.touserdata(\!!bs\detokenize{#3}\!!es) + userdata = \!!bs\detokenize{#3}\!!es })% }}% \else - \normalexpanded{\ctxlua{structures.registers.extend("\currentregister","\currentregisterlabel", { + \normalexpanded{\ctxcommand{extendregister("\currentregister","\currentregisterlabel", { metadata = { - catcodes = \the\catcodetable, + % catcodes = \the\catcodetable, coding = "\currentregistercoding", \ifx\currentregisterownnumber\v!yes own = "\registerparameter\c!alternative", % can be used instead of pagenumber @@ -352,7 +392,7 @@ \!!bs\currentregisterentries\!!es, \!!bs\currentregisterkeys\!!es }, - userdata = structures.helpers.touserdata(\!!bs\detokenize{#3}\!!es) + userdata = \!!bs\detokenize{#3}\!!es }) }}% \fi @@ -374,7 +414,7 @@ % \placeregister[index][n=1] % \stoptext -% some overlap wit previous +% some overlap with previous \unexpanded\def\setstructurepageregister {\dotripleempty\strc_registers_set} @@ -421,16 +461,16 @@ \fi \setnextinternalreference % we could consider storing register entries in list - \edef\temp{\ctxlua{ structures.registers.store { + \edef\temp{\ctxcommand{storeregister{ metadata = { kind = "see", name = "\currentregister", - level = structures.sections.currentlevel(), - catcodes = \the\catcodetable, + % level = structures.sections.currentlevel(), + % catcodes = \the\catcodetable, }, references = { - internal = \nextinternalreference, - section = structures.sections.currentid(), + % internal = \nextinternalreference, + % section = structures.sections.currentid(), }, entries = { % we need a special one for xml, this is just a single one @@ -457,12 +497,13 @@ 
{\begingroup \edef\currentregister{#1}% \setupregister[\currentregister][#2]% - \normalexpanded{\endgroup\noexpand\xdef\noexpand\utilityregisterlength{\ctxlua{structures.registers.analyze('\currentregister',{ + \normalexpanded{\endgroup\noexpand\xdef\noexpand\utilityregisterlength{\ctxcommand{analyzeregister('\currentregister',{ language = "\registerparameter\s!language", method = "\registerparameter\c!method", numberorder = "\registerparameter\c!numberorder", compress = "\registerparameter\c!compress", criterium = "\registerparameter\c!criterium", + pagenumber = \ifx\registerpageseparatorsymbol\empty false\else true\fi, })}}}% brrr \ifcase\utilityregisterlength\relax \resetsystemmode\v!register @@ -479,6 +520,27 @@ \unexpanded\def\placeregister {\dodoubleempty\strc_registers_place} +% \def\strc_registers_place[#1][#2]% +% {\iffirstargument +% \begingroup +% %\forgetall +% \edef\currentregister{#1}% +% \setupregister[\currentregister][#2]% +% \the\everyplaceregister +% \ifnum\registerparameter\c!n>\plusone +% \startcolumns +% [\c!n=\registerparameter\c!n, +% \c!balance=\registerparameter\c!balance, +% \c!align=\registerparameter\c!align, +% \c!tolerance=\registerparameter\c!tolerance]% +% \strc_registers_place_indeed +% \stopcolumns +% \else +% \strc_registers_place_indeed +% \fi +% \endgroup +% \fi} + \def\strc_registers_place[#1][#2]% {\iffirstargument \begingroup @@ -486,43 +548,36 @@ \edef\currentregister{#1}% \setupregister[\currentregister][#2]% \the\everyplaceregister - \ifnum\registerparameter\c!n>\plusone - \startcolumns - [\c!n=\registerparameter\c!n, - \c!balance=\registerparameter\c!balance, - \c!align=\registerparameter\c!align, - \c!tolerance=\registerparameter\c!tolerance]% - \strc_registers_place_indeed - \stopcolumns + \ifnum\namedmixedcolumnsparameter\currentregister\c!n>\plusone + \startmixedcolumns[\currentregister] + \strc_registers_place_indeed + \stopmixedcolumns \else \strc_registers_place_indeed \fi \endgroup \fi} 
-\def\strc_registers_place_columns - {\startcolumns - [\c!n=\registerparameter\c!n, - \c!balance=\registerparameter\c!balance, - \c!align=\registerparameter\c!align, - \c!tolerance=\registerparameter\c!tolerance]% - \startpacked[\v!blank]% - \strc_registers_place_indeed - \stoppacked - \stopcolumns} - -\def\strc_registers_place_normal - {\startpacked[\v!blank]% - \strc_registers_place_indeed - \stoppacked} +% \def\strc_registers_place_columns +% {\startmixedcolumns[\currentregister] +% \startpacked[\v!blank]% +% \strc_registers_place_indeed +% \stoppacked +% \stopmixedcolumns} +% +% \def\strc_registers_place_normal +% {\startpacked[\v!blank]% +% \strc_registers_place_indeed +% \stoppacked} \def\strc_registers_place_indeed - {\ctxlua{structures.registers.process('\currentregister',{ + {\ctxcommand{processregister('\currentregister',{ language = "\registerparameter\s!language", method = "\registerparameter\c!method", numberorder = "\registerparameter\c!numberorder", compress = "\registerparameter\c!compress", criterium = "\registerparameter\c!criterium", + pagenumber = \ifx\registerpageseparatorsymbol\empty false\else true\fi, },{ separatorset = "\registerparameter\c!pageprefixseparatorset", conversionset = "\registerparameter\c!pageprefixconversionset", @@ -685,6 +740,9 @@ % \hangafter\plusone % \let\currentregister\savedcurrentregister} +\newdimen\d_strc_registers_hangindent +\newcount\c_strc_registers_hangafter + \unexpanded\def\startregisterentries#1% depth {\endgraf \begingroup @@ -696,8 +754,9 @@ \ifnum\scratchcounter>\plusone \advance\leftskip\d_strc_registers_distance\relax \fi - \hangindent\registerparameter\c!distance\relax - \hangafter\plusone + \d_strc_registers_hangindent\registerparameter\c!distance\relax + \c_strc_registers_hangafter \plusone +\blank[\v!samepage]% \let\currentregister\savedcurrentregister} \unexpanded\def\stopregisterentries @@ -705,6 +764,15 @@ \dostoptagged \endgroup} +\unexpanded\def\startregisterentry#1% todo: level + {\begingroup 
+ \hangindent\d_strc_registers_hangindent + \hangafter \c_strc_registers_hangafter} + +\unexpanded\def\stopregisterentry + {\endgraf + \endgroup} + \unexpanded\def\startregistersection#1% title {\dostarttagged\t!registersection\empty \dostarttagged\t!registertag\empty @@ -745,7 +813,7 @@ \fi} \unexpanded\def\registeronepagerangeseparator - {|\endash|} + {|\endash|} % todo use \prewordbreak \def\withregisterpagecommand#1#2#3#4% {\def\currentregisterpageindex{#2}% @@ -846,7 +914,7 @@ % \placeregister[index][n=1,pagecommand=\MyRegisterPageCommand] % \stoptext -\def\registerpageuserdata #1#2{\ctxlua{structures.registers.userdata(#1,"#2")}} +\def\registerpageuserdata #1#2{\ctxcommand{registeruserdata(#1,"#2")}} \def\currentregisterpageuserdata {\registerpageuserdata\currentregisterpageindex} % {#1} % not yet ok : new internal handler names @@ -857,10 +925,10 @@ \installcorenamespace{registersymbol} \setvalue{\??registersymbol n}% - {\def\registerpageseparatorsymbol{, }} + {\def\registerpageseparatorsymbol{,\space}} \setvalue{\??registersymbol a}% - {\def\registerpageseparatorsymbol{, }} % now done via conversion + {\def\registerpageseparatorsymbol{,\space}} % now done via conversion \setvalue{\??registersymbol\v!none}% {\let\registerpageseparatorsymbol\empty @@ -904,4 +972,6 @@ [\v!index] % [\v!indices] +\stopcontextdefinitioncode + \protect \endinput diff --git a/tex/context/base/strc-rsc.lua b/tex/context/base/strc-rsc.lua index a90f577e3..e2105a4ef 100644 --- a/tex/context/base/strc-rsc.lua +++ b/tex/context/base/strc-rsc.lua @@ -67,11 +67,11 @@ local function splitreference(str) local t = lpegmatch(referencesplitter,str) if t then local a = t.arguments - if a and find(a,"\\") then + if a and find(a,"\\",1,true) then t.has_tex = true else local o = t.arguments - if o and find(o,"\\") then + if o and find(o,"\\",1,true) then t.has_tex = true end end diff --git a/tex/context/base/strc-sec.mkiv b/tex/context/base/strc-sec.mkiv index 2962e2c49..122892104 100644 --- 
a/tex/context/base/strc-sec.mkiv +++ b/tex/context/base/strc-sec.mkiv @@ -15,6 +15,8 @@ \unprotect +\startcontextdefinitioncode + \installcorenamespace{structure} \installdirectcommandhandler \??structure {structure} % unchecked, so we need to initialize used parameters @@ -101,8 +103,11 @@ {\setfalse\c_strc_bookmarks_preroll} \def\strc_sectioning_autobookmark#1% - {\nodestostring\tempstring{#1}% - \globallet\currentstructurebookmark\tempstring} + {\begingroup + \the\everypreroll + \nodestostring\tempstring{#1}% + \globallet\currentstructurebookmark\tempstring + \endgroup} % so it's an experiment @@ -130,9 +135,9 @@ \xdef\currentstructuremarking {\structureparameter\c!marking}% \xdef\currentstructurelist {\structureparameter\c!list}% \xmlstopraw -\iflocation \ifx\currentstructurebookmark\empty \ifconditional\c_strc_bookmarks_preroll - \strc_sectioning_autobookmark\currentstructuretitle -\fi \fi \fi + \iflocation \ifx\currentstructurebookmark\empty \ifconditional\c_strc_bookmarks_preroll + \strc_sectioning_autobookmark\currentstructuretitle + \fi \fi \fi \ifx\currentstructurelist\empty \globallet\currentstructurelist\currentstructuretitle \fi @@ -143,23 +148,23 @@ \xdef\currentstructurebookmark{\structureparameter\c!bookmark}% \xdef\currentstructuremarking {\structureparameter\c!marking}% \xdef\currentstructurelist {\structureparameter\c!list}% -\iflocation \ifx\currentstructurebookmark\empty \ifconditional\c_strc_bookmarks_preroll - \strc_sectioning_autobookmark\currentstructuretitle -\fi \fi \fi + \iflocation \ifx\currentstructurebookmark\empty \ifconditional\c_strc_bookmarks_preroll + \strc_sectioning_autobookmark\currentstructuretitle + \fi \fi \fi \else \xdef\currentstructuretitle {\detokenizedstructureparameter\c!title}% \xdef\currentstructurebookmark{\detokenizedstructureparameter\c!bookmark}% \xdef\currentstructuremarking {\detokenizedstructureparameter\c!marking}% \xdef\currentstructurelist {\detokenizedstructureparameter\c!list}% \iflocation 
\ifx\currentstructurebookmark\empty -\ifconditional\c_strc_bookmarks_preroll - \strc_sectioning_autobookmark{\structureparameter\c!title}% -\else - \begingroup - \simplifycommands - \xdef\currentstructurebookmark{\detokenize\expandafter{\normalexpanded{\structureparameter\c!title}}}% - \endgroup -\fi + \ifconditional\c_strc_bookmarks_preroll + \strc_sectioning_autobookmark{\structureparameter\c!title}% + \else + \begingroup + \simplifycommands + \xdef\currentstructurebookmark{\detokenize\expandafter{\normalexpanded{\structureparameter\c!title}}}% + \endgroup + \fi \fi \fi \fi \ifx\currentstructurelist\empty @@ -170,8 +175,8 @@ \setnextinternalreference \storeinternalreference\currentstructurename\nextinternalreference % \strc_sectioning_set_reference_prefix - \xdef\currentstructurenumber{\ctxlua{ % todo: combine with next call, adapt marks accordingly - structures.sections.somelevel { + \ctxcommand{% todo: combine with next call, adapt marks accordingly + setsectionentry{ references = { internal = \nextinternalreference, block = "\currentsectionblock", @@ -218,7 +223,9 @@ numberdata = { % needed ? 
block = "\currentsectionblock", - hidenumber = \ifx\currentstructureshownumber\v!no true\else nil\fi, % titles + \ifx\currentstructureshownumber\v!no + hidenumber = true, % titles + \fi % so far separatorset = "\structureparameter\c!sectionseparatorset", conversion = "\structureparameter\c!sectionconversion", % for good old times sake @@ -231,14 +238,12 @@ }, userdata = \!!bs\detokenize{#3}\!!es % will be converted to table at the lua end } - }}% - % \xdef\currentstructurelistnumber{\ctxcommand{addtolist(structures.sections.current())}}% + }% \xdef\currentstructurelistnumber{\ctxcommand{currentsectiontolist()}}% % \currentstructuresynchronize has to be called someplace, since it introduces a node \setstructuresynchronization\currentstructurelistnumber \endgroup} -\let\currentstructurenumber \!!zerocount \let\currentsectioncountervalue \!!zerocount % redefined later \let\previoussectioncountervalue\!!zerocount % redefined later @@ -300,14 +305,14 @@ \newconditional\c_strc_rendering_continuous % not used (mkii ?) 
-\def\setstructurelevel #1#2{\ctxlua{structures.sections.setlevel("#1","#2")}} % name, level|parent -\def\getstructurelevel #1{\ctxlua{structures.sections.getcurrentlevel("#1")}}% name -\def\setstructurenumber #1#2{\ctxlua{structures.sections.setnumber(#1,"#2")}} % level, number (+/-) -\def\getstructurenumber #1{\ctxlua{structures.sections.getnumber(#1)}} % level -\def\getsomestructurenumber #1#2{\ctxlua{structures.sections.getnumber(#1,"#2")}} % level, what -\def\getfullstructurenumber #1{\ctxlua{structures.sections.fullnumber(#1)}} % level -\def\getsomefullstructurenumber#1#2{\ctxlua{structures.sections.fullnumber(#1,"#2")}} -\def\getspecificstructuretitle #1{\ctxlua{structures.sections.structuredata("#1","titledata.title",nil,"\headparameter\s!catcodes")}}% +\def\setstructurelevel #1#2{\ctxcommand{setsectionlevel("#1","#2")}} % name, level|parent +\def\getstructurelevel #1{\ctxcommand{getcurrentsectionlevel("#1")}}% name +\def\setstructurenumber #1#2{\ctxcommand{setsectionnumber(#1,"#2")}} % level, number (+/-) +\def\getstructurenumber #1{\ctxcommand{getsectionnumber(#1)}} % level +\def\getsomestructurenumber #1#2{\ctxcommand{getsectionnumber(#1,"#2")}} % level, what +\def\getfullstructurenumber #1{\ctxcommand{getfullsectionnumber(#1)}} % level +\def\getsomefullstructurenumber#1#2{\ctxcommand{getfullsectionnumber(#1,"#2")}} +\def\getspecificstructuretitle #1{\ctxcommand{getstructuredata("#1","titledata.title",nil,"\headparameter\s!catcodes")}}% % will be: % @@ -435,7 +440,7 @@ \edef\currentsectionheadcoupling{\sectionheadcoupling\currenthead}% \edef\currentsectionheadsection {\sectionheadsection \currentsectionheadcoupling}% \edef\currentsectionlevel {\sectionlevel \currentsectionheadsection}% - \ctxlua{structures.sections.register("\currenthead",{ + \ctxcommand{registersection("\currenthead",{ coupling = "\currentsectionheadcoupling", section = "\currentsectionheadsection", level = \currentsectionlevel, @@ -578,8 +583,8 @@ % head -> head 
-\def\sectionheadmarkingtitle #1#2{\ctxlua{structures.marks.title("#1","#2")}} -\def\sectionheadmarkingnumber#1#2{\ctxlua{structures.marks.number("#1","#2")}} +\def\sectionheadmarkingtitle #1#2{\ctxcommand{markingtitle("#1","#2")}} +\def\sectionheadmarkingnumber#1#2{\ctxcommand{markingnumber("#1","#2")}} \def\sectionheadcoupling#1{\namedheadparameter{#1}\c!coupling} \def\sectionheadsection #1{\namedheadparameter{#1}\c!section} @@ -763,7 +768,7 @@ \unexpanded\def\placeheadtext {\dosingleempty\strc_sectioning_place_head_text } % use with care \unexpanded\def\placeheadnumber{\dosingleempty\strc_sectioning_place_head_number} % use with care -\unexpanded\def\strc_sectioning_report{\ctxlua{structures.sections.reportstructure()}} +\unexpanded\def\strc_sectioning_report{\ctxcommand{reportstructure()}} \ifdefined\strc_rendering_initialize_style_and_color \else @@ -1039,8 +1044,8 @@ #1% \fi} -\def\currentsectioncountervalue {\ctxlua{structures.sections.depthnumber(\thenamedheadlevel\currenthead)}} -\def\previoussectioncountervalue{\ctxlua{structures.sections.depthnumber(\thenamedheadlevel\currenthead-1)}} +\def\currentsectioncountervalue {\ctxcommand{depthnumber(\thenamedheadlevel\currenthead)}} +\def\previoussectioncountervalue{\ctxcommand{depthnumber(\thenamedheadlevel\currenthead-1)}} \def\strc_sectioning_handle_page_nop {\edef\p_continue{\headparameter\c!continue}% @@ -1119,7 +1124,7 @@ \let\sectioncountervalue\structurevalue -\def\currentheadtext{obsolete, use marks} +\def\currentheadtext{obsolete,\space use marks} % list references, will be redone in lua when we need it @@ -1154,4 +1159,6 @@ \finalizeautostructurelevels \to \everystoptext +\stopcontextdefinitioncode + \protect \endinput diff --git a/tex/context/base/strc-syn.lua b/tex/context/base/strc-syn.lua index ca4b3ac18..604365b2d 100644 --- a/tex/context/base/strc-syn.lua +++ b/tex/context/base/strc-syn.lua @@ -12,6 +12,9 @@ local allocate = utilities.storage.allocate -- interface to tex end +local context = 
context +local sorters = sorters + local structures = structures local synonyms = structures.synonyms local tags = structures.tags @@ -19,6 +22,10 @@ local tags = structures.tags local collected = allocate() local tobesaved = allocate() +local firstofsplit = sorters.firstofsplit +local strip = sorters.strip +local splitter = sorters.splitters.utf + synonyms.collected = collected synonyms.tobesaved = tobesaved @@ -114,8 +121,6 @@ function synonyms.filter(data,options) end function synonyms.prepare(data) - local strip = sorters.strip - local splitter = sorters.splitters.utf local result = data.result if result then for i=1, #result do @@ -123,7 +128,7 @@ function synonyms.prepare(data) local rd = r.definition if rd then local rt = rd.tag - local sortkey = (rt and rt ~= "" and rt) or rd.synonym + local sortkey = rt and rt ~= "" and rt or rd.synonym r.split = splitter(strip(sortkey)) end end @@ -140,13 +145,17 @@ function synonyms.finalize(data,options) local split = { } for k=1,#result do local v = result[k] - local entry, tag = sorters.firstofsplit(v) + local entry, tag = firstofsplit(v) local s = split[entry] -- keeps track of change + local d if not s then - s = { tag = tag, data = { } } + d = { } + s = { tag = tag, data = d } split[entry] = s + else + d = s.data end - s.data[#s.data+1] = v + d[#d+1] = v end data.result = split end @@ -154,24 +163,21 @@ end -- for now, maybe at some point we will do a multipass or so -- maybe pass the settings differently +local ctx_synonymentry = context.synonymentry + function synonyms.flush(data,options) local kind = data.metadata.kind -- hack, will be done better - -- context[format("\\start%soutput",kind)]() local result = data.result local sorted = table.sortedkeys(result) for k=1,#sorted do local letter = sorted[k] local sublist = result[letter] local data = sublist.data - -- context[format("\\start%ssection",kind)](sublist.tag) for d=1,#data do local entry = data[d].definition - -- 
context[format("\\%sentry",kind)](d,entry.tag,entry.synonym,entry.meaning or "") - context("\\%sentry{%s}{%s}{%s}{%s}",kind,d,entry.tag,entry.synonym,entry.meaning or "") + ctx_synonymentry(d,entry.tag,entry.synonym,entry.meaning or "") end - -- context[format("\\stop%ssection",kind)]() end - -- context[format("\\stop%soutput",kind)]() data.result = nil data.metadata.sorted = false end @@ -196,3 +202,8 @@ function synonyms.process(class,options) end end +commands.registersynonym = synonyms.register +commands.registerusedsynonym = synonyms.registerused +commands.synonymmeaning = synonyms.meaning +commands.synonymname = synonyms.synonym +commands.processsynonyms = synonyms.process diff --git a/tex/context/base/strc-syn.mkiv b/tex/context/base/strc-syn.mkiv index e0087d450..73aca18e6 100644 --- a/tex/context/base/strc-syn.mkiv +++ b/tex/context/base/strc-syn.mkiv @@ -20,20 +20,6 @@ \unprotect -\ifdefined\dotagsynonym \else \let\dotagsynonym\relax \fi -\ifdefined\dotagsorting \else \let\dotagsorting\relax \fi - -% general help, can be shared - -% simplifiedcommands -> flag in lua -% -% expansion -% criterium -> when start, then flag in list -% command-> wanneer? -% state -> flagging enabled -% conversion ? 
-% todo: register xml mode etc - % split but common in lua \def\preprocessexpansion#1#2#3#4% @@ -51,13 +37,93 @@ \globallet#3\s!tex \fi} -\installcorenamespace{synonym} +%D We now use a simple list variant: + +\installcorenamespace {simplelist} + +\installcommandhandler \??simplelist {simplelist} \??simplelist + +\let\setupsimplelists\setupsimplelist + +\setupsimplelists[% + %c!title=, + %c!text=, + % + %c!style=, + %c!color=, + %c!command=, + %c!align=, + % + %c!headstyle=, + %c!headcolor=, + %c!headalign=, + % + %c!titlestyle=, + %c!titlecolor=, + %c!titlecommand=, + %c!titleleft=, + %c!titleright=, + % + %c!closesymbol=, + %c!closecommand=, + % + \c!alternative=\v!left, + \c!display=\v!yes, + \c!width=7\emwidth, + \c!distance=\emwidth, + \c!titledistance=.5\emwidth, + %c!hang=, + %c!sample=, + \c!margin=\v!no, + \c!before=\blank, + \c!inbetween=\blank, + \c!after=\blank, + %c!indentnext=, + %c!indenting=, + % + \c!expansion=\v!no, + %c!xmlsetup=, + %s!catcodes=, + \s!language=\currentmainlanguage, +] + +\appendtoks + \setfalse\c_strc_constructions_define_commands + \ifx\currentsimplelistparent\empty + \defineconstruction[\currentsimplelist][\s!handler=\v!simplelist,\c!level=1]% + \else + \defineconstruction[\currentsimplelist][\currentsimplelistparent][\s!handler=\v!simplelist,\c!level=1]% + \fi + \settrue\c_strc_constructions_define_commands +\to \everydefinesimplelist + +\setuvalue{\??constructioninitializer\v!simplelist}% + {\let\currentsimplelist \currentconstruction + \let\constructionparameter \simplelistparameter + \let\detokenizedconstructionparameter\detokenizedsimplelistparameter + \let\letconstructionparameter \letsimplelistparameter + \let\useconstructionstyleandcolor \usesimpleliststyleandcolor + \let\setupcurrentconstruction \setupcurrentsimplelist} + +\setuvalue{\??constructionfinalizer\v!simplelist}% + {} + +\setuvalue{\??constructiontexthandler\v!simplelist}% + {\begingroup + \useconstructionstyleandcolor\c!headstyle\c!headcolor + 
\the\everyconstruction + \constructionparameter\c!headcommand + {\strut + \currentsimplelistentry}% + \endgroup} -\installsimplecommandhandler \??synonym {synonym} \??synonym +% And we build on top of this. -\let\setupsynonyms\setupsynonym +\ifdefined\dotagsynonym \else \let\dotagsynonym\relax \fi +\ifdefined\dotagsorting \else \let\dotagsorting\relax \fi -\setupsynonyms +\definesimplelist + [\v!synonym] [\c!state=\v!start, %\c!synonymstyle=, %\c!textstyle=, @@ -75,50 +141,62 @@ %\c!after=, \c!indentnext=\v!no, %\c!expansion=, - \c!method=, - \s!language=\currentmainlanguage] + \c!method=] + +\let\setupsynonyms\setupsimplelist \unexpanded\def\definesynonyms - {\doquadrupleempty\dodefinesynonyms} + {\doquadrupleempty\strc_synonyms_define} -\def\dodefinesynonyms[#1][#2][#3][#4]% name plural \meaning \use +\def\strc_synonyms_define[#1][#2][#3][#4]% name plural \meaning \use {\edef\currentsynonym{#1}% \iffourthargument - \unexpanded\def#4##1{\doinsertsynonym{#1}{##1}}% name tag + \unexpanded\def#4##1{\strc_synonyms_insert{#1}{##1}}% name tag \ifthirdargument - \unexpanded\def#3##1{\doinsertsynonymmeaning{#1}{##1}}% \meaning + \unexpanded\def#3##1{\strc_synonyms_insert_meaning{#1}{##1}}% \meaning \fi \setuvalue{#1}{\definesynonym[\v!no][#1]}% \name \else \ifthirdargument - \unexpanded\def#3##1{\doinsertsynonymmeaning{#1}{##1}}% \meaning + \unexpanded\def#3##1{\strc_synonyms_insert_meaning{#1}{##1}}% \meaning \fi \setuvalue{#1}{\definesynonym[\v!yes][#1]}% \name \fi - \checksynonymparent - \setupcurrentsynonym[\s!single={#1},\s!multi={#2}]% + % +% \checksynonymparent +% \setupcurrentsynonym[\s!single={#1},\s!multi={#2}]% + \setfalse\c_strc_constructions_define_commands + \definesimplelist + [\currentsynonym]% + [\v!sorting] + [\s!single={#1},% + \s!multi={#2}]% + \settrue\c_strc_constructions_define_commands + % \presetheadtext[#2=\Word{#2}]% changes the \if...argument + % \setvalue{\e!setup #2\e!endsetup}{\setupsynonym[#1]}% obsolete definition \setvalue{\e!place 
\e!listof#2}{\placelistofsynonyms[#1]}% accepts extra argument \setvalue{\e!complete\e!listof#2}{\completelistofsynonyms[#1]}} \unexpanded\def\definesynonym - {\dotripleempty\dodefinesynonym} + {\dotripleempty\strc_synonyms_define_entry} -\def\dodefinesynonym[#1][#2][#3]#4#5% +\def\strc_synonyms_define_entry[#1][#2][#3]#4#5% {\begingroup \edef\currentsynonym{#2}% \edef\currentsynonymtag{#3}% + \let\currentsimplelist\currentsynonym \ifx\currentsynonymtag\empty \edef\currentsynonymtag{#4}% \fi \ifx\currentsynonymtag\empty % todo: error message \else - \edef\currentsynonymexpansion{\synonymparameter\c!expansion}% + \edef\currentsynonymexpansion{\simplelistparameter\c!expansion}% \preprocessexpansion\currentsynonymexpansion\currentsynonymtext \currentsynonymcoding{#4}% \preprocessexpansion\currentsynonymexpansion\currentsynonymmeaning\currentsynonymcoding{#5}% - \ctxlua{structures.synonyms.register("\currentsynonym", "synonym", { + \ctxcommand{registersynonym("\currentsynonym", "synonym", { metadata = { catcodes = \the\catcodetable, coding = "\currentsynonymcoding", @@ -131,91 +209,77 @@ used = false, } })}% - \doif{#1}\v!yes{\setuxvalue\currentsynonymtag{\noexpand\doinsertsynonym{\currentsynonym}{\currentsynonymtag}}}% + \doif{#1}\v!yes{\setuxvalue\currentsynonymtag{\strc_synonyms_insert{\currentsynonym}{\currentsynonymtag}}}% \fi \endgroup} \unexpanded\def\registersynonym - {\dodoubleargument\doregistersynonym} + {\dodoubleargument\strc_synonyms_register} -\def\doregistersynonym[#1][#2]% - {\ctxlua{structures.synonyms.registerused("#1","#2")}} +\def\strc_synonyms_register[#1][#2]% + {\ctxcommand{registerusedsynonym("#1","#2")}} -\unexpanded\def\doinsertsynonymmeaning#1#2% name tag +\unexpanded\def\strc_synonyms_insert_meaning#1#2% name tag {\begingroup - \def\currentsynonym{#1}% - \usesynonymstyleandcolor\c!textstyle\c!textcolor - \synonymparameter\c!textcommand{\ctxlua{structures.synonyms.meaning("#1","#2")}}% + \def\currentsimplelist{#1}% + 
\usesimpleliststyleandcolor\c!textstyle\c!textcolor + \simplelistparameter\c!textcommand{\ctxcommand{synonymmeaning("#1","#2")}}% \endgroup} -\unexpanded\def\doinsertsynonym#1#2% name tag +\unexpanded\def\strc_synonyms_insert#1#2% name tag {\begingroup - \def\currentsynonym{#1}% + \edef\currentsimplelist{#1}% + \let\currentsynonym\currentsimplelist % for a while \def\currentsynonymtag{#2}% \dostarttagged\t!synonym\currentsynonym \dotagsynonym - \usesynonymstyleandcolor\c!synonymstyle\c!synonymcolor - \synonymparameter\c!synonymcommand{\ctxlua{structures.synonyms.synonym("#1","#2")}}% + \usesimpleliststyleandcolor\c!synonymstyle\c!synonymcolor + \simplelistparameter\c!synonymcommand{\ctxcommand{synonymname("#1","#2")}}% \dostoptagged - \normalexpanded{\endgroup\synonymparameter\c!next}} + \normalexpanded{\endgroup\simplelistparameter\c!next}} \unexpanded\def\placelistofsynonyms - {\dodoubleempty\doplacelistofsynonyms} + {\dodoubleempty\strc_synonyms_place_list} -\def\doplacelistofsynonyms[#1][#2]% +\def\strc_synonyms_place_list[#1][#2]% {\begingroup - \def\currentsynonym{#1}% - \definedescription % todo, per class - [syndef] - [\c!location=\synonymparameter\c!location, - \c!width=\synonymparameter\c!width, - \c!distance=\synonymparameter\c!distance, - \c!sample=\synonymparameter\c!sample, - \c!hang=\synonymparameter\c!hang, - \c!align=\synonymparameter\c!align, - \c!before=\synonymparameter\c!before, - \c!inbetween=\synonymparameter\c!inbetween, - \c!after=\synonymparameter\c!after, - \c!indentnext=\synonymparameter\c!indentnext, - \c!headstyle=\synonymparameter\c!textstyle, - \c!headcolor=\synonymparameter\c!textcolor, - \c!style=, - \c!color=. 
- #2]% + \edef\currentsimplelist{#1}% + \strc_constructions_initialize{#1}% + \setupcurrentsimplelist[#2]% + \let\synonymentry\strc_synonym_normal \startpacked - \ctxlua{structures.synonyms.process('#1',{ - criterium = "\synonymparameter\c!criterium", - language = "\synonymparameter\s!language", - method = "\synonymparameter\c!method", + \ctxcommand{processsynonyms('#1',{ + criterium = "\simplelistparameter\c!criterium", + language = "\simplelistparameter\s!language", + method = "\simplelistparameter\c!method", })}% \stoppacked \endgroup} \def\completelistofsynonyms - {\dodoubleempty\docompletelistofsynonyms} + {\dodoubleempty\strc_synonyms_complete_list} -\def\docompletelistofsynonyms[#1][#2]% - {\edef\currentsynonym{#1}% - \normalexpanded{\startnamedsection[\v!chapter][\c!title={\headtext{\synonymparameter\s!multi}},\c!reference=#1]}% - \doplacelistofsynonyms[#1][#2]% +\def\strc_synonyms_complete_list[#1][#2]% + {\begingroup + \edef\currentsimplelist{#1}% + \normalexpanded{\startnamedsection[\v!chapter][\c!title={\headtext{\simplelistparameter\s!multi}},\c!reference=#1]}% + \strc_synonyms_place_list[#1][#2]% \page - \stopnamedsection} - -\let\startsynonymoutput \relax -\let\stopsynonymoutput \relax -\let\startsynonymsection\gobbleoneargument -\let\stopsynonymsection \relax + \stopnamedsection + \endgroup} -\unexpanded\def\synonymentry#1#2#3#4% - {\syndef{#3}#4\par} +\unexpanded\def\strc_synonym_normal#1#2#3#4% + {\begingroup + \def\currentsimplelistentry{#3}% + \csname\??constructionstarthandler\v!construction\endcsname + #4% + \csname\??constructionstophandler\v!construction\endcsname + \endgroup} %D Sorting (a simplified version of synonym). -\installcorenamespace{sorting} - -\installsimplecommandhandler \??sorting {sorting} \??sorting - -\setupsorting +\definesimplelist + [\v!sorting] [\c!state=\v!start, %\c!command=, % we test for defined ! 
%\c!criterium=, @@ -223,48 +287,57 @@ %\c!before=, \c!after=\endgraf, %\c!expansion=, - \c!method=, - \s!language=\currentmainlanguage] + \c!method=] + +\let\setupsorting\setupsimplelist \unexpanded\def\definesorting - {\dotripleempty\dodefinesorting} + {\dotripleempty\strc_sorting_define} % if #3=\relax or \v!none, then no command but still protected -\def\dodefinesorting[#1][#2][#3]% +\def\strc_sorting_define[#1][#2][#3]% {\edef\currentsorting{#1}% \ifthirdargument \doifnot{#3}\v!none {\ifx#3\relax \else - \unexpanded\def#3##1{\doinsertsort{#1}{##1}}% + \unexpanded\def#3##1{\strc_sorting_insert{#1}{##1}}% \fi}% \setuvalue{#1}{\definesort[\v!no][#1]}% \else \setuvalue{#1}{\definesort[\v!yes][#1]}% \fi - \checksortingparent - \setupcurrentsorting[\s!multi={#2}]% + \setfalse\c_strc_constructions_define_commands + \definesimplelist + [\currentsorting]% + [\v!sorting] + [\s!single={#1},% + \s!multi={#2}]% + \settrue\c_strc_constructions_define_commands + % \presetheadtext[#2=\Word{#2}]% after \ifthirdargument -) + % \setvalue{\e!setup #2\e!endsetup}{\setupsorting[#1]}% obsolete definition \setvalue{\e!place \e!listof#2}{\placelistofsorts[#1]}% \setvalue{\e!complete\e!listof#2}{\completelistofsorts[#1]}} \unexpanded\def\definesort - {\dotripleempty\dodefinesort} + {\dotripleempty\strc_sorting_define_entry} -\def\dodefinesort[#1][#2][#3]#4% +\def\strc_sorting_define_entry[#1][#2][#3]#4% {\begingroup \edef\currentsorting{#2}% \edef\currentsortingtag{#3}% + \let\currentsimplelist\currentsorting \ifx\currentsortingtag\empty \edef\currentsortingtag{#4}% \fi \ifx\currentsortingtag\empty % todo: error message \else - \edef\currentsortingexpansion{\sortingparameter\c!expansion}% + \edef\currentsortingexpansion{\simplelistparameter\c!expansion}% \preprocessexpansion\currentsortingexpansion\currentsortingtext\currentsortingcoding{#4}% - \ctxlua{structures.synonyms.register("\currentsorting", "sorting", { + \ctxcommand{registersynonym("\currentsorting", "sorting", { metadata =
{ catcodes = \the\catcodetable, coding = "\currentsortingcoding", @@ -276,67 +349,77 @@ % used = false, } })}% - \doif{#1}\v!yes{\setuxvalue\currentsortingtag{\noexpand\doinsertsort{\currentsorting}{\currentsortingtag}}}% + \doif{#1}\v!yes{\setuxvalue\currentsortingtag{\strc_sorting_insert{\currentsorting}{\currentsortingtag}}}% \fi \endgroup} -\unexpanded\def\doinsertsort#1#2% name tag +\unexpanded\def\strc_sorting_insert#1#2% name tag {\begingroup % no kap currently, of .. we need to map cap onto WORD \edef\currentsorting{#1}% \def\currentsortingtag{#2}% + \let\currentsimplelist\currentsorting \dostarttagged\t!sorting\currentsorting \dotagsorting - \usesortingstyleandcolor\c!style\c!color - \ctxlua{structures.synonyms.synonym("#1","#2")}% + \usesimpleliststyleandcolor\c!style\c!color + \ctxcommand{synonymname("#1","#2")}% \dostoptagged - \normalexpanded{\endgroup\sortingparameter\c!next}} + \normalexpanded{\endgroup\simplelistparameter\c!next}} \unexpanded\def\registersort - {\dodoubleargument\doregistersort} + {\dodoubleargument\strc_sorting_register} -\def\doregistersort[#1][#2]% - {\ctxlua{structures.synonyms.registerused("#1","#2")}} +\def\strc_sorting_register[#1][#2]% + {\ctxcommand{registerusedsynonym("#1","#2")}} % before after % % maybe just 'commandset' and then combine \unexpanded\def\placelistofsorts - {\dodoubleempty\doplacelistofsorts} + {\dodoubleempty\strc_sorting_place_list} -\def\doplacelistofsorts[#1][#2]% NOG EEN RUWE VERSIE MAKEN ZONDER WITRUIMTE ETC ETC +\def\strc_sorting_place_list[#1][#2]% {\begingroup - \def\currentsorting{#1}% - \setupcurrentsorting[#2]% + \edef\currentsimplelist{#1}% + \strc_constructions_initialize{#1}% + \setupcurrentsimplelist[#2]% + \edef\p_simplelist_command{\simplelistparameter\c!command}% + \ifx\p_simplelist_command\empty + \let\synonymentry\strc_sorting_normal + \else + \let\synonymentry\strc_sorting_command + \fi \startpacked - \ctxlua{structures.synonyms.process('#1',{ - criterium = 
"\sortingparameter\c!criterium", - language = "\sortingparameter\s!language", - method = "\sortingparameter\c!method", + \ctxcommand{processsynonyms('#1',{ + criterium = "\simplelistparameter\c!criterium", + language = "\simplelistparameter\s!language", + method = "\simplelistparameter\c!method", })}% \stoppacked \endgroup} \unexpanded\def\completelistofsorts - {\dodoubleempty\docompletelistofsorts} + {\dodoubleempty\strc_sorting_complete_list} -\def\docompletelistofsorts[#1][#2]% - {\edef\currentsorting{#1}% - \normalexpanded{\startnamedsection[\v!chapter][\c!title={\headtext{\sortingparameter\s!multi}},\c!reference=#1]}% - \doplacelistofsorts[#1][#2]% +\def\strc_sorting_complete_list[#1][#2]% + {\begingroup + \edef\currentsimplelist{#1}% + \normalexpanded{\startnamedsection[\v!chapter][\c!title={\headtext{\simplelistparameter\s!multi}},\c!reference=#1]}% + \strc_sorting_place_list[#1][#2]% \page - \stopnamedsection} + \stopnamedsection + \endgroup} -\let\startsortingoutput \relax -\let\stopsortingoutput \relax -\let\startsortingsection\gobbleoneargument -\let\stopsortingsection \relax +\def\strc_sorting_command#1#2#3#4% #4 is meaning but empty here + {\p_simplelist_command{#1}{#2}{#3}} -\def\sortingentry#1#2#3#4% #4 is meaning but empty here - {\doifelsenothing{\sortingparameter\c!command} - {\begingroup\usesortingstyleandcolor\c!style\c!color#3\endgroup\par} % todo - {\sortingparameter\c!command{#1}{#2}{#3}}} +\def\strc_sorting_normal#1#2#3#4% #4 is meaning but empty here + {\begingroup + \usesimpleliststyleandcolor\c!style\c!color + #3% + \endgroup + \par} %D Presets. 
diff --git a/tex/context/base/syst-aux.lua b/tex/context/base/syst-aux.lua index 6b5e18d16..de15428f9 100644 --- a/tex/context/base/syst-aux.lua +++ b/tex/context/base/syst-aux.lua @@ -78,7 +78,7 @@ end -- end -- end -local pattern = (C((1-P("%"))^1) * Carg(1)) /function(n,d) return format("%.0fsp",d * tonumber(n)/100) end * P("%") * P(-1) +local pattern = (C((1-P("%"))^1) * Carg(1)) /function(n,d) return format("%.0fsp",d * tonumber(n)/100) end * P("%") * P(-1) -- .0 ? -- commands.percentageof("10%",65536*10) @@ -109,7 +109,8 @@ function commands.thetexdefinition(str) context(lpegmatch(pattern,str)) end -local upper, lower = utf.upper, utf.lower +local upper, lower, strip = utf.upper, utf.lower, string.strip function commands.upper(s) context(upper(s)) end function commands.lower(s) context(lower(s)) end +function commands.strip(s) context(strip(s)) end diff --git a/tex/context/base/syst-aux.mkiv b/tex/context/base/syst-aux.mkiv index c7be461a3..308e4b6fc 100644 --- a/tex/context/base/syst-aux.mkiv +++ b/tex/context/base/syst-aux.mkiv @@ -128,10 +128,16 @@ \newif\if!!doned \newif\if!!donee \newif\if!!donef \def\!!zerocount {0} % alongside \zerocount -\def\!!minusone {-1} % alongside \minusone -\def\!!plusone {1} % alongside \plusone -\def\!!plustwo {2} % alongside \plustwo -\def\!!plusthree {3} % alongside \plusthree +\def\!!minusone {-1} % ... +\def\!!plusone {1} % ... +\def\!!plustwo {2} % ... +\def\!!plusthree {3} % ... +\def\!!plusfour {4} % ... +\def\!!plusfive {5} % ... +\def\!!plussix {6} % ... +\def\!!plusseven {7} % ... +\def\!!pluseight {8} % ... 
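The syst-aux.lua hunk above only adds a `-- .0 ?` remark to the percentage pattern, but the pattern itself is compact enough to be worth unpacking. A standalone sketch, assuming a plain Lua interpreter with the lpeg module available; `655360` plays the role of a dimension in scaled points (10pt):

```lua
-- Hedged sketch of the percentageof pattern: parse "N%" and return
-- N percent of a dimension (passed as an extra match argument).
local lpeg = require("lpeg")
local P, C, Carg = lpeg.P, lpeg.C, lpeg.Carg
local format = string.format

local pattern = (C((1 - P("%"))^1) * Carg(1))          -- number text + extra arg
              / function(n, d) return format("%.0fsp", d * tonumber(n) / 100) end
              * P("%") * P(-1)                          -- require trailing % at end

print(lpeg.match(pattern, "10%", 1, 65536 * 10))        --> 65536sp
```

`Carg(1)` captures the first extra argument of `lpeg.match`, which is how the dimension reaches the capture function without a closure per call.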
+\def\!!plusnine {9} % alongside \plusnine \setnewconstant \uprotationangle 0 \setnewconstant\rightrotationangle 90 @@ -346,6 +352,12 @@ \let\if_next_blank_space_token\iffalse \futurelet\nexttoken\syst_helpers_inspect_next_bgroup_character} +\unexpanded\def\doifnextbgroupcselse#1#2% + {\let\m_syst_action_yes#1% + \let\m_syst_action_nop#2% + \let\if_next_blank_space_token\iffalse + \futurelet\nexttoken\syst_helpers_inspect_next_bgroup_character} + \def\syst_helpers_inspect_next_bgroup_character {\ifx\nexttoken\blankspace \expandafter\syst_helpers_reinspect_next_bgroup_character diff --git a/tex/context/base/syst-con.lua b/tex/context/base/syst-con.lua index 48f02da3a..dfbd49051 100644 --- a/tex/context/base/syst-con.lua +++ b/tex/context/base/syst-con.lua @@ -6,29 +6,39 @@ if not modules then modules = { } end modules ['syst-con'] = { license = "see context related readme files" } -converters = converters or { } +local tonumber = tonumber +local utfchar = utf.char +local gsub, format = string.gsub, string.format + +converters = converters or { } +local converters = converters + +local context = context +local commands = commands + +local formatters = string.formatters --[[ldx-- <p>For raw 8 bit characters, the offset is 0x110000 (bottom of plane 18) at the top of <l n='luatex'/>'s char range but outside the unicode range.</p> --ldx]]-- -local tonumber = tonumber -local utfchar = utf.char -local gsub, format = string.gsub, string.format +function converters.hexstringtonumber(n) tonumber(n,16) end +function converters.octstringtonumber(n) tonumber(n, 8) end -function converters.hexstringtonumber(n) tonumber(n,16) end -function converters.octstringtonumber(n) tonumber(n, 8) end function converters.rawcharacter (n) utfchar(0x110000+n) end -function converters.lchexnumber (n) format("%x" ,n) end -function converters.uchexnumber (n) format("%X" ,n) end -function converters.lchexnumbers (n) format("%02x",n) end -function converters.uchexnumbers (n) format("%02X",n) end
-function converters.octnumber (n) format("%03o",n) end + +converters.lchexnumber = formatters["%x" ] +converters.uchexnumber = formatters["%X" ] +converters.lchexnumbers = formatters["%02x"] +converters.uchexnumbers = formatters["%02X"] +converters.octnumber = formatters["%03o"] function commands.hexstringtonumber(n) context(tonumber(n,16)) end function commands.octstringtonumber(n) context(tonumber(n, 8)) end + function commands.rawcharacter (n) context(utfchar(0x110000+n)) end + function commands.lchexnumber (n) context("%x" ,n) end function commands.uchexnumber (n) context("%X" ,n) end function commands.lchexnumbers (n) context("%02x",n) end @@ -53,10 +63,10 @@ local cos, sin, tan = math.cos, math.sin, math.tan -- function commands.cos (n) context(cos (n)) end -- function commands.tan (n) context(tan (n)) end -function commands.sind(n) context("%0.6f",sind(n)) end -function commands.cosd(n) context("%0.6f",cosd(n)) end -function commands.tand(n) context("%0.6f",tand(n)) end +function commands.sind(n) context("%0.6F",sind(n)) end +function commands.cosd(n) context("%0.6F",cosd(n)) end +function commands.tand(n) context("%0.6F",tand(n)) end -function commands.sin (n) context("%0.6f",sin (n)) end -function commands.cos (n) context("%0.6f",cos (n)) end -function commands.tan (n) context("%0.6f",tan (n)) end +function commands.sin (n) context("%0.6F",sin (n)) end +function commands.cos (n) context("%0.6F",cos (n)) end +function commands.tan (n) context("%0.6F",tan (n)) end diff --git a/tex/context/base/syst-ini.mkiv b/tex/context/base/syst-ini.mkiv index 38c34556a..fda873d3c 100644 --- a/tex/context/base/syst-ini.mkiv +++ b/tex/context/base/syst-ini.mkiv @@ -301,7 +301,7 @@ %D 128-1023 are private and should not be touched. 
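The syst-con.lua hunk above replaces per-call `string.format` wrappers with precompiled formatter functions from ConTeXt's `string.formatters`. Plain Lua has no such table, so this sketch emulates the cache with a memoizing metatable; everything here is an assumption-free stand-in for illustration, not ConTeXt's actual implementation:

```lua
-- Hedged sketch: a formatters cache that turns a format string into a
-- reusable one-argument function, created once and then looked up.
local formatters = setmetatable({}, {
    __index = function(t, fmt)
        local f = function(n) return string.format(fmt, n) end
        t[fmt] = f -- cache for subsequent uses of the same format
        return f
    end,
})

local lchexnumber = formatters["%x"]
local uchexnumber = formatters["%X"]
local octnumber   = formatters["%03o"]

print(lchexnumber(255))   --> ff
print(uchexnumber(255))   --> FF
print(octnumber(8))       --> 010
print(tonumber("ff", 16)) --> 255  (the inverse direction used by
print(tonumber("777", 8)) --> 511   hexstringtonumber/octstringtonumber)
```

The gain is the same as with the `ctx_` locals elsewhere in this commit: the format string is parsed into a function once instead of on every call.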
\let\attributeunsetvalue\c_syst_min_counter_value % used to be \minusone -\normalprotected\def\newattribute{\syst_basics_allocate\c_syst_min_allocated_attribute\attribute\attributedef\c_syst_max_allocated_register} +\normalprotected\def\newattribute{\syst_basics_allocate\c_syst_last_allocated_attribute\attribute\attributedef\c_syst_max_allocated_register} %D Not used by \CONTEXT\ but for instance \PICTEX\ needs it. It's a trick to force %D strings instead of tokens that take more memory. It's a trick to trick to force diff --git a/tex/context/base/syst-lua.lua b/tex/context/base/syst-lua.lua index e47041444..cd7dcc062 100644 --- a/tex/context/base/syst-lua.lua +++ b/tex/context/base/syst-lua.lua @@ -17,37 +17,37 @@ local context = context function commands.writestatus(...) logs.status(...) end -- overloaded later -local firstoftwoarguments = context.firstoftwoarguments -- context.constructcsonly("firstoftwoarguments" ) -local secondoftwoarguments = context.secondoftwoarguments -- context.constructcsonly("secondoftwoarguments") -local firstofoneargument = context.firstofoneargument -- context.constructcsonly("firstofoneargument" ) -local gobbleoneargument = context.gobbleoneargument -- context.constructcsonly("gobbleoneargument" ) +local ctx_firstoftwoarguments = context.firstoftwoarguments -- context.constructcsonly("firstoftwoarguments" ) +local ctx_secondoftwoarguments = context.secondoftwoarguments -- context.constructcsonly("secondoftwoarguments") +local ctx_firstofoneargument = context.firstofoneargument -- context.constructcsonly("firstofoneargument" ) +local ctx_gobbleoneargument = context.gobbleoneargument -- context.constructcsonly("gobbleoneargument" ) --- contextsprint(prtcatcodes,[[\ui_fo]]) -- firstofonearguments --- contextsprint(prtcatcodes,[[\ui_go]]) -- gobbleonearguments --- contextsprint(prtcatcodes,[[\ui_ft]]) -- firstoftwoarguments --- contextsprint(prtcatcodes,[[\ui_st]]) -- secondoftwoarguments +-- contextsprint(prtcatcodes,[[\ui_fo]]) -- 
ctx_firstofonearguments +-- contextsprint(prtcatcodes,[[\ui_go]]) -- ctx_gobbleonearguments +-- contextsprint(prtcatcodes,[[\ui_ft]]) -- ctx_firstoftwoarguments +-- contextsprint(prtcatcodes,[[\ui_st]]) -- ctx_secondoftwoarguments function commands.doifelse(b) if b then - firstoftwoarguments() + ctx_firstoftwoarguments() else - secondoftwoarguments() + ctx_secondoftwoarguments() end end function commands.doif(b) if b then - firstofoneargument() + ctx_firstofoneargument() else - gobbleoneargument() + ctx_gobbleoneargument() end end function commands.doifnot(b) if b then - gobbleoneargument() + ctx_gobbleoneargument() else - firstofoneargument() + ctx_firstofoneargument() end end @@ -59,9 +59,9 @@ end function commands.doifelsespaces(str) if find(str,"^ +$") then - firstoftwoarguments() + ctx_firstoftwoarguments() else - secondoftwoarguments() + ctx_secondoftwoarguments() end end @@ -84,12 +84,12 @@ function commands.doifcommonelse(a,b) -- often the same test for i=1,na do for j=1,nb do if ha[i] == hb[j] then - firstoftwoarguments() + ctx_firstoftwoarguments() return end end end - secondoftwoarguments() + ctx_secondoftwoarguments() end function commands.doifinsetelse(a,b) @@ -97,20 +97,20 @@ function commands.doifinsetelse(a,b) if not hb then hb = lpegmatch(s,b) h[b] = hb end for i=1,#hb do if a == hb[i] then - firstoftwoarguments() + ctx_firstoftwoarguments() return end end - secondoftwoarguments() + ctx_secondoftwoarguments() end local pattern = lpeg.patterns.validdimen function commands.doifdimenstringelse(str) if lpegmatch(pattern,str) then - firstoftwoarguments() + ctx_firstoftwoarguments() else - secondoftwoarguments() + ctx_secondoftwoarguments() end end diff --git a/tex/context/base/tabl-ntb.mkiv b/tex/context/base/tabl-ntb.mkiv index 42c61f16c..3734e5647 100644 --- a/tex/context/base/tabl-ntb.mkiv +++ b/tex/context/base/tabl-ntb.mkiv @@ -1500,7 +1500,6 @@ \fi \fi} - \def\tabl_ntb_check_heights_one {\dorecurse\c_tabl_ntb_maximum_row 
{\c_tabl_ntb_current_row_three\recurselevel\relax diff --git a/tex/context/base/tabl-tbl.lua b/tex/context/base/tabl-tbl.lua index 21564a472..b088a1008 100644 --- a/tex/context/base/tabl-tbl.lua +++ b/tex/context/base/tabl-tbl.lua @@ -9,21 +9,25 @@ if not modules then modules = { } end modules ['tabl-tbl'] = { -- A couple of hacks ... easier to do in Lua than in regular TeX. More will -- follow. -local context, commands = context, commands - local tonumber = tonumber local gsub, rep, sub, find = string.gsub, string.rep, string.sub, string.find local P, C, Cc, Ct, lpegmatch = lpeg.P, lpeg.C, lpeg.Cc, lpeg.Ct, lpeg.match +local context = context +local commands = commands + local texsetcount = tex.setcount -local separator = P("|") -local nested = lpeg.patterns.nested -local pattern = Ct((separator * (C(nested) + Cc("")) * C((1-separator)^0))^0) +local separator = P("|") +local nested = lpeg.patterns.nested +local pattern = Ct((separator * (C(nested) + Cc("")) * C((1-separator)^0))^0) + +local ctx_settabulatelastentry = context.settabulatelastentry +local ctx_settabulateentry = context.settabulateentry function commands.presettabulate(preamble) preamble = gsub(preamble,"~","d") -- let's get rid of ~ mess here - if find(preamble,"%*") then + if find(preamble,"*",1,true) then -- todo: lpeg but not now preamble = gsub(preamble, "%*(%b{})(%b{})", function(n,p) return rep(sub(p,2,-2),tonumber(sub(n,2,-2)) or 1) @@ -35,7 +39,7 @@ function commands.presettabulate(preamble) texsetcount("global","c_tabl_tabulate_has_rule_spec_first", t[1] == "" and 0 or 1) texsetcount("global","c_tabl_tabulate_has_rule_spec_last", t[m+1] == "" and 0 or 1) for i=1,m,2 do - context.settabulateentry(t[i],t[i+1]) + ctx_settabulateentry(t[i],t[i+1]) end - context.settabulatelastentry(t[m+1]) + ctx_settabulatelastentry(t[m+1]) end diff --git a/tex/context/base/tabl-tbl.mkiv b/tex/context/base/tabl-tbl.mkiv index 82d1be893..1aeaa2e56 100644 --- a/tex/context/base/tabl-tbl.mkiv +++ 
b/tex/context/base/tabl-tbl.mkiv @@ -1076,8 +1076,8 @@ \tabulatenoalign{\kern-\lineheight}% \fi} -\setuvalue{\e!start\v!tabulatehead}{\doifnextoptionalelse\tabl_tabulate_start_head_yes\tabl_tabulate_start_head_nop} -\setuvalue{\e!start\v!tabulatetail}{\doifnextoptionalelse\tabl_tabulate_start_foot_yes\tabl_tabulate_start_foot_nop} +\setuvalue{\e!start\v!tabulatehead}{\doifnextoptionalcselse\tabl_tabulate_start_head_yes\tabl_tabulate_start_head_nop} +\setuvalue{\e!start\v!tabulatetail}{\doifnextoptionalcselse\tabl_tabulate_start_foot_yes\tabl_tabulate_start_foot_nop} \let\m_tabl_tabulate_data\empty @@ -1097,7 +1097,7 @@ % {\bgroup % \edef\currenttabulationparent{#1}% % \let\currenttabulation\currenttabulationparent -% \doifnextoptionalelse\tabl_start_defined_yes\tabl_start_defined_nop} +% \doifnextoptionalcselse\tabl_start_defined_yes\tabl_start_defined_nop} % % \def\tabl_start_defined_yes[#1]% % {\edef\currenttabulation{\currenttabulation:#1}% diff --git a/tex/context/base/tabl-xtb.lua b/tex/context/base/tabl-xtb.lua index 653eb6e08..d9daefe69 100644 --- a/tex/context/base/tabl-xtb.lua +++ b/tex/context/base/tabl-xtb.lua @@ -41,7 +41,6 @@ local format = string.format local concat = table.concat local points = number.points -local context = context local context_beginvbox = context.beginvbox local context_endvbox = context.endvbox local context_blank = context.blank @@ -573,8 +572,8 @@ function xtables.reflow_height() local total = totalheight + totaldepth local leftover = settings.textheight - total if leftover > 0 then - local leftheight = (totalheight / total ) * leftover / #heights - local leftdepth = (totaldepth / total ) * leftover / #depths + local leftheight = (totalheight / total) * leftover / #heights + local leftdepth = (totaldepth / total) * leftover / #depths for i=1,nofrows do heights[i] = heights[i] + leftheight depths [i] = depths [i] + leftdepth diff --git a/tex/context/base/tabl-xtb.mkvi b/tex/context/base/tabl-xtb.mkvi index 556bec5ce..cca56dbee 
100644 --- a/tex/context/base/tabl-xtb.mkvi +++ b/tex/context/base/tabl-xtb.mkvi @@ -402,7 +402,7 @@ \unexpanded\def\startxrow {\begingroup - \doifnextoptionalelse\tabl_x_start_row_yes\tabl_x_start_row_nop} + \doifnextoptionalcselse\tabl_x_start_row_yes\tabl_x_start_row_nop} \unexpanded\def\tabl_x_start_row_reflow_width_yes[#settings]% {\setupcurrentxtable[#settings]% @@ -435,7 +435,7 @@ \endgroup} \unexpanded\def\startxcell - {\doifnextoptionalelse\tabl_x_start_cell_yes\tabl_x_start_cell_nop} + {\doifnextoptionalcselse\tabl_x_start_cell_yes\tabl_x_start_cell_nop} \unexpanded\def\stopxcell {\tabl_x_stop_cell} @@ -677,7 +677,7 @@ \unexpanded\def\startxgroup {\begingroup - \doifnextoptionalelse\tabl_x_start_group_delayed_one\relax} + \doifnextoptionalcselse\tabl_x_start_group_delayed_one\relax} \unexpanded\def\stopxgroup {\endgroup} @@ -695,7 +695,7 @@ \chaintocurrentxtable{#tag}% \fi \edef\currentxtable{#tag}% - \doifnextoptionalelse\setupcurrentxtable\relax} + \doifnextoptionalcselse\setupcurrentxtable\relax} \let\startxrowgroup \startxgroup \let\stopxrowgroup \stopxgroup @@ -706,7 +706,7 @@ \unexpanded\def\startxcell {\begingroup - \doifnextoptionalelse\tabl_x_start_cell_delayed_one\tabl_x_start_cell_nop} + \doifnextoptionalcselse\tabl_x_start_cell_delayed_one\tabl_x_start_cell_nop} \unexpanded\def\tabl_x_start_cell_delayed_one[#tag]% % {\ifcsname\namedxtablehash{#tag}\s!parent\endcsname @@ -721,7 +721,7 @@ \chaintocurrentxtable{#tag}% \fi \edef\currentxtable{#tag}% - \doifnextoptionalelse\tabl_x_start_cell_yes\tabl_x_start_cell_nop} + \doifnextoptionalcselse\tabl_x_start_cell_yes\tabl_x_start_cell_nop} \unexpanded\def\stopxcell {\tabl_x_stop_cell @@ -731,7 +731,7 @@ \unexpanded\def\startxrow {\begingroup - \doifnextoptionalelse\tabl_x_start_row_delayed_one\tabl_x_start_row_nop} + \doifnextoptionalcselse\tabl_x_start_row_delayed_one\tabl_x_start_row_nop} \unexpanded\def\tabl_x_start_row_delayed_one[#tag]% % {\ifcsname\namedxtablehash{#tag}\s!parent\endcsname @@ 
-746,7 +746,7 @@ \chaintocurrentxtable{#tag}% \fi \edef\currentxtable{#tag}% - \doifnextoptionalelse\tabl_x_start_row_yes\tabl_x_start_row_nop} + \doifnextoptionalcselse\tabl_x_start_row_yes\tabl_x_start_row_nop} \unexpanded\def\stopxrow {\tabl_x_stop_row diff --git a/tex/context/base/task-ini.lua b/tex/context/base/task-ini.lua index fa9b0cf10..75ce08232 100644 --- a/tex/context/base/task-ini.lua +++ b/tex/context/base/task-ini.lua @@ -18,11 +18,13 @@ if not modules then modules = { } end modules ['task-ini'] = { -- not apply the font handler, we can remove all checks for subtypes 255 local tasks = nodes.tasks +local prependaction = tasks.prependaction local appendaction = tasks.appendaction local disableaction = tasks.disableaction local freezegroup = tasks.freezegroup local freezecallbacks = callbacks.freeze + appendaction("processors", "normalizers", "typesetters.characters.handler") -- always on appendaction("processors", "normalizers", "fonts.collections.process") -- disabled appendaction("processors", "normalizers", "fonts.checkers.missing") -- disabled @@ -120,6 +122,11 @@ appendaction("vboxbuilders", "normalizers", "builders.vspacing.vboxhandler") appendaction("mvlbuilders", "normalizers", "typesetters.checkers.handler") appendaction("vboxbuilders", "normalizers", "typesetters.checkers.handler") +-- rather special (this might get hardcoded): + +prependaction("processors", "before", "nodes.properties.attach") -- enabled but optimized for quick abort +appendaction ("shipouts", "normalizers", "nodes.properties.delayed") -- enabled but optimized for quick abort + -- speedup: only kick in when used disableaction("processors", "languages.replacements.handler") diff --git a/tex/context/base/trac-deb.lua b/tex/context/base/trac-deb.lua index 4cc48c4a5..af4f7c643 100644 --- a/tex/context/base/trac-deb.lua +++ b/tex/context/base/trac-deb.lua @@ -9,27 +9,30 @@ if not modules then modules = { } end modules ['trac-deb'] = { local lpeg, status = lpeg, status local 
lpegmatch = lpeg.match -local format, concat, match = string.format, table.concat, string.match +local format, concat, match, find = string.format, table.concat, string.match, string.find local tonumber, tostring = tonumber, tostring -- maybe tracers -> tracers.tex (and tracers.lua for current debugger) -local report_system = logs.reporter("system","tex") +----- report_tex = logs.reporter("tex error") +----- report_lua = logs.reporter("lua error") +local report_nl = logs.newline +local report_str = logs.writer -tracers = tracers or { } -local tracers = tracers +tracers = tracers or { } +local tracers = tracers -tracers.lists = { } -local lists = tracers.lists +tracers.lists = { } +local lists = tracers.lists -tracers.strings = { } -local strings = tracers.strings +tracers.strings = { } +local strings = tracers.strings -local texgetdimen = tex.getdimen -local texgettoks = tex.gettoks -local texgetcount = tex.getcount +local texgetdimen = tex.getdimen +local texgettoks = tex.gettoks +local texgetcount = tex.getcount -strings.undefined = "undefined" +strings.undefined = "undefined" lists.scratch = { 0, 2, 4, 6, 8 @@ -96,7 +99,19 @@ function tracers.knownlist(name) return l and #l > 0 end -function tracers.showlines(filename,linenumber,offset,errorstr) +local savedluaerror = nil + +local function errorreporter(luaerror) + if luaerror then + logs.enable("lua error") -- + return logs.reporter("lua error") + else + logs.enable("tex error") + return logs.reporter("tex error") + end +end + +function tracers.showlines(filename,linenumber,offset,luaerrorline) local data = io.loaddata(filename) if not data or data == "" then local hash = url.hashed(filename) @@ -109,35 +124,18 @@ function tracers.showlines(filename,linenumber,offset,errorstr) end local lines = data and string.splitlines(data) if lines and #lines > 0 then - -- This does not work completely as we cannot access the last Lua error using - -- table.print(status.list()). This is on the agenda. 
Eventually we will - -- have a sequence of checks here (tex, lua, mp) at this end. - -- - -- Actually, in 0.75+ the lua error message is even weirder as you can - -- get: - -- - -- LuaTeX error [string "\directlua "]:3: unexpected symbol near '1' ... - -- - -- <inserted text> \endgroup \directlua { - -- - -- So there is some work to be done in the LuaTeX engine. - -- - local what, where = match(errorstr,[[LuaTeX error <main (%a+) instance>:(%d+)]]) - or match(errorstr,[[LuaTeX error %[string "\\(.-lua) "%]:(%d+)]]) -- buglet - if where then + if luaerrorline and luaerrorline > 0 then -- lua error: linenumber points to last line local start = "\\startluacode" local stop = "\\stopluacode" - local where = tonumber(where) - if lines[linenumber] == start then - local n = linenumber - for i=n,1,-1 do - if lines[i] == start then - local n = i + where - if n <= linenumber then - linenumber = n - end + local n = linenumber + for i=n,1,-1 do + if find(lines[i],start) then + n = i + luaerrorline - 1 + if n <= linenumber then + linenumber = n end + break end end end @@ -159,30 +157,84 @@ function tracers.showlines(filename,linenumber,offset,errorstr) end end -function tracers.printerror(offset) - local inputstack = resolvers.inputstack - local filename = inputstack[#inputstack] or status.filename - local linenumber = tonumber(status.linenumber) or 0 +-- this will work ok in >=0.79 + +-- todo: last tex error has ! prepended +-- todo: some nested errors have two line numbers +-- todo: collect errorcontext in string (after code cleanup) +-- todo: have a separate status.lualinenumber + +-- todo: \starttext bla \blank[foo] bla \stoptext + +local function processerror(offset) + local inputstack = resolvers.inputstack + local filename = inputstack[#inputstack] or status.filename + local linenumber = tonumber(status.linenumber) or 0 + -- + -- print("[[ last tex error: " .. tostring(status.lasterrorstring) .. " ]]") + -- print("[[ last lua error: " .. 
tostring(status.lastluaerrorstring) .. " ]]") + -- print("[[ start errorcontext ]]") + -- tex.show_context() + -- print("\n[[ stop errorcontext ]]") + -- + local lasttexerror = status.lasterrorstring or "?" + local lastluaerror = status.lastluaerrorstring or lasttexerror + local luaerrorline = match(lastluaerror,[[lua%]?:.-(%d+)]]) or (lastluaerror and find(lastluaerror,"?:0:",1,true) and 0) + local report = errorreporter(luaerrorline) + tracers.printerror { + filename = filename, + linenumber = linenumber, + lasttexerror = lasttexerror, + lastluaerror = lastluaerror, + luaerrorline = luaerrorline, + offset = tonumber(offset) or 10, + } +end + +-- so one can overload the printer if (really) needed + +function tracers.printerror(specification) + local filename = specification.filename + local linenumber = specification.linenumber + local lasttexerror = specification.lasttexerror + local lastluaerror = specification.lastluaerror + local luaerrorline = specification.luaerrorline + local offset = specification.offset + local report = errorreporter(luaerrorline) if not filename then - report_system("error not related to input file: %s ...",status.lasterrorstring) + report("error not related to input file: %s ...",lasttexerror) elseif type(filename) == "number" then - report_system("error on line %s of filehandle %s: %s ...",linenumber,filename,status.lasterrorstring) + report("error on line %s of filehandle %s: %s ...",linenumber,lasttexerror) else - -- currently we still get the error message printed to the log/console so we - -- add a bit of spacing around our variant - texio.write_nl("\n") - local errorstr = status.lasterrorstring or "?" - -- inspect(status.list()) - report_system("error on line %s in file %s: %s ...\n",linenumber,filename,errorstr) -- lua error? 
- texio.write_nl(tracers.showlines(filename,linenumber,offset,errorstr),"\n") + report_nl() + if luaerrorline then + report("error on line %s in file %s:\n\n%s",linenumber,filename,lastluaerror) +-- report("error on line %s in file %s:\n\n%s",linenumber,filename,lasttexerror) + else + report("error on line %s in file %s: %s",linenumber,filename,lasttexerror) + if tex.show_context then + report_nl() + tex.show_context() + end + end + report_nl() + report_str(tracers.showlines(filename,linenumber,offset,tonumber(luaerrorline))) + report_nl() end end +local nop = function() end + directives.register("system.errorcontext", function(v) + local register = callback.register if v then - callback.register('show_error_hook', function() tracers.printerror(v) end) + register('show_error_message', nop) + register('show_error_hook', function() processerror(v) end) + register('show_lua_error_hook', nop) else - callback.register('show_error_hook', nil) + register('show_error_message', nil) + register('show_error_hook', nil) + register('show_lua_error_hook', nil) end end) diff --git a/tex/context/base/trac-inf.lua b/tex/context/base/trac-inf.lua index 067cff27c..034726ffc 100644 --- a/tex/context/base/trac-inf.lua +++ b/tex/context/base/trac-inf.lua @@ -12,7 +12,7 @@ if not modules then modules = { } end modules ['trac-inf'] = { -- and rawget. 
local type, tonumber, select = type, tonumber, select -local format, lower = string.format, string.lower +local format, lower, find = string.format, string.lower, string.find local concat = table.concat local clock = os.gettimeofday or os.clock -- should go in environment @@ -123,10 +123,8 @@ function statistics.show() -- this code will move local register = statistics.register register("used platform", function() - local mask = lua.mask or "ascii" - return format("%s, type: %s, binary subtree: %s, symbol mask: %s (%s)", - os.platform or "unknown",os.type or "unknown", environment.texos or "unknown", - mask,mask == "utf" and "τεχ" or "tex") + return format("%s, type: %s, binary subtree: %s", + os.platform or "unknown",os.type or "unknown", environment.texos or "unknown") end) register("luatex banner", function() return lower(status.banner) @@ -139,16 +137,25 @@ function statistics.show() return format("%s direct, %s indirect, %s total", total-indirect, indirect, total) end) if jit then - local status = { jit.status() } - if status[1] then - register("luajit status", function() - return concat(status," ",2) - end) + local jitstatus = { jit.status() } + if jitstatus[1] then + register("luajit options", concat(jitstatus," ",2)) end end -- so far -- collectgarbage("collect") - register("current memory usage",statistics.memused) + register("lua properties",function() + local list = status.list() + local hashchar = tonumber(list.luatex_hashchars) + local mask = lua.mask or "ascii" + return format("engine: %s, used memory: %s, hash type: %s, hash chars: min(%s,40), symbol mask: %s (%s)", + jit and "luajit" or "lua", + statistics.memused(), + list.luatex_hashtype or "default", + hashchar and 2^hashchar or "unknown", + mask, + mask == "utf" and "τεχ" or "tex") + end) register("runtime",statistics.runtime) logs.newline() -- initial newline for i=1,#statusinfo do diff --git a/tex/context/base/trac-log.lua b/tex/context/base/trac-log.lua index 0d0b66260..45cc550d4 100644 --- 
a/tex/context/base/trac-log.lua +++ b/tex/context/base/trac-log.lua @@ -6,6 +6,9 @@ if not modules then modules = { } end modules ['trac-log'] = { license = "see context related readme files" } +-- In fact all writes could go through lua and we could write the console and +-- terminal handler in lua then. Ok, maybe it's slower then, so a no-go. + -- if tex and (tex.jobname or tex.formatname) then -- -- -- quick hack, awaiting speedup in engine (8 -> 6.4 sec for --make with console2) @@ -535,9 +538,10 @@ local function setblocked(category,value) v.state = value end else - states = utilities.parsers.settings_to_hash(category) + states = utilities.parsers.settings_to_hash(category,type(states)=="table" and states or nil) for c, _ in next, states do - if data[c] then + local v = data[c] + if v then v.state = value else c = topattern(c,true,true) diff --git a/tex/context/base/trac-vis.lua b/tex/context/base/trac-vis.lua index 420e9a00d..eb5373ee3 100644 --- a/tex/context/base/trac-vis.lua +++ b/tex/context/base/trac-vis.lua @@ -32,6 +32,7 @@ local formatters = string.formatters -- todo: global switch (so no attributes) -- todo: maybe also xoffset, yoffset of glyph -- todo: inline concat (more efficient) +-- todo: tags can also be numbers (just add to hash) local nodecodes = nodes.nodecodes local disc_code = nodecodes.disc @@ -43,6 +44,7 @@ local glue_code = nodecodes.glue local penalty_code = nodecodes.penalty local whatsit_code = nodecodes.whatsit local user_code = nodecodes.user +local math_code = nodecodes.math local gluespec_code = nodecodes.gluespec local kerncodes = nodes.kerncodes @@ -58,6 +60,7 @@ local leftskip_code = gluecodes.leftskip local rightskip_code = gluecodes.rightskip local whatsitcodes = nodes.whatsitcodes +local mathcodes = nodes.mathcodes local nuts = nodes.nuts local tonut = nuts.tonut @@ -98,8 +101,10 @@ local unsetvalue = attributes.unsetvalue local current_font = font.current -local exheights = fonts.hashes.exheights -local emwidths = 
fonts.hashes.emwidths +local fonthashes = fonts.hashes +local chardata = fonthashes.characters +local exheights = fonthashes.exheights +local emwidths = fonthashes.emwidths local pt_factor = number.dimenfactors.pt local nodepool = nuts.pool @@ -138,6 +143,7 @@ local trace_fontkern local trace_strut local trace_whatsit local trace_user +local trace_math local report_visualize = logs.reporter("visualize") @@ -157,21 +163,22 @@ local modes = { simplevbox = 1024 + 2, simplevtop = 1024 + 4, user = 2048, + math = 4096, } local modes_makeup = { "hbox", "vbox", "kern", "glue", "penalty" } local modes_boxes = { "hbox", "vbox" } -local modes_all = { "hbox", "vbox", "kern", "glue", "penalty", "fontkern", "whatsit", "glyph", "user" } +local modes_all = { "hbox", "vbox", "kern", "glue", "penalty", "fontkern", "whatsit", "glyph", "user", "math" } local usedfont, exheight, emwidth -local l_penalty, l_glue, l_kern, l_fontkern, l_hbox, l_vbox, l_vtop, l_strut, l_whatsit, l_glyph, l_user +local l_penalty, l_glue, l_kern, l_fontkern, l_hbox, l_vbox, l_vtop, l_strut, l_whatsit, l_glyph, l_user, l_math local enabled = false local layers = { } local preset_boxes = modes.hbox + modes.vbox local preset_makeup = preset_boxes + modes.kern + modes.glue + modes.penalty -local preset_all = preset_makeup + modes.fontkern + modes.whatsit + modes.glyph + modes.user +local preset_all = preset_makeup + modes.fontkern + modes.whatsit + modes.glyph + modes.user + modes.math function visualizers.setfont(id) usedfont = id or current_font() @@ -208,6 +215,7 @@ local function enable() l_whatsit = layers.whatsit l_glyph = layers.glyph l_user = layers.user + l_math = layers.math nodes.tasks.enableaction("shipouts","nodes.visualizers.handler") report_visualize("enabled") enabled = true @@ -301,6 +309,7 @@ local c_skip_a = "trace:c" local c_skip_b = "trace:m" local c_glyph = "trace:o" local c_white = "trace:w" +local c_math = "trace:r" local c_positive_d = "trace:db" local c_negative_d = "trace:dr" @@ -311,6 
+320,7 @@ local c_skip_a_d = "trace:dc" local c_skip_b_d = "trace:dm" local c_glyph_d = "trace:do" local c_white_d = "trace:dw" +local c_math_d = "trace:dr" local function sometext(str,layer,color,textcolor) -- we can just paste verbatim together .. no typesteting needed local text = fast_hpack_string(str,usedfont) @@ -369,8 +379,7 @@ local function fontkern(head,current) end local w_cache = { } - -local tags = { +local tags = { open = "FIC", write = "FIW", close = "FIC", @@ -417,15 +426,38 @@ local function whatsit(head,current) return head, current end +local u_cache = { } + local function user(head,current) local what = getsubtype(current) - local info = w_cache[what] + local info = u_cache[what] if info then -- print("hit user") else info = sometext(formatters["U:%s"](what),usedfont) setattr(info,a_layer,l_user) - w_cache[what] = info + u_cache[what] = info + end + head, current = insert_node_after(head,current,copy_list(info)) + return head, current +end + +local m_cache = { } +local tags = { + beginmath = "B", + endmath = "E", +} + +local function math(head,current) + local what = getsubtype(current) + local info = m_cache[what] + if info then + -- print("hit math") + else + local tag = mathcodes[what] + info = sometext(formatters["M:%s"](tag and tags[tag] or what),usedfont,nil,c_math_d) + setattr(info,a_layer,l_math) + m_cache[what] = info end head, current = insert_node_after(head,current,copy_list(info)) return head, current @@ -439,7 +471,7 @@ local function ruledbox(head,current,vertical,layer,what,simple,previous) local ht = getfield(current,"height") local dp = getfield(current,"depth") local next = getnext(current) - local prev = previous -- getprev(current) ... prev can be wrong in math mode + local prev = previous -- getprev(current) ... 
prev can be wrong in math mode < 0.78.3 setfield(current,"next",nil) setfield(current,"prev",nil) local linewidth = emwidth/10 @@ -538,6 +570,7 @@ end local function ruledglyph(head,current,previous) local wd = getfield(current,"width") + -- local wd = chardata[getfield(current,"font")][getfield(current,"char")].width if wd ~= 0 then local ht = getfield(current,"height") local dp = getfield(current,"depth") @@ -720,6 +753,7 @@ local function visualize(head,vertical) local trace_glyph = false local trace_simple = false local trace_user = false + local trace_math = false local current = head local previous = nil local attr = unsetvalue @@ -742,6 +776,7 @@ local function visualize(head,vertical) trace_glyph = false trace_simple = false trace_user = false + trace_math = false else -- dead slow: trace_hbox = hasbit(a, 1) trace_vbox = hasbit(a, 2) @@ -755,6 +790,7 @@ local function visualize(head,vertical) trace_glyph = hasbit(a, 512) trace_simple = hasbit(a,1024) trace_user = hasbit(a,2048) + trace_math = hasbit(a,4096) end attr = a end @@ -829,9 +865,13 @@ local function visualize(head,vertical) head, current = whatsit(head,current) end elseif id == user_code then - if trace_whatsit then + if trace_user then head, current = user(head,current) end + elseif id == math_code then + if trace_math then + head, current = math(head,current) + end end previous = current current = getnext(current) @@ -860,7 +900,7 @@ local function cleanup() nk, k_cache = freed(k_cache) nw, w_cache = freed(w_cache) nb, b_cache = freed(b_cache) - -- report_visualize("cache: %s fontkerns, %s skips, %s penalties, %s kerns, %s whatsits, %s boxes",nf,ng,np,nk,nw,nb) + -- report_visualize("cache cleanup: %s fontkerns, %s skips, %s penalties, %s kerns, %s whatsits, %s boxes",nf,ng,np,nk,nw,nb) end local function handler(head) @@ -931,9 +971,11 @@ function commands.markfonts(n) visualizers.markfonts(n) end +luatex.registerstopactions(cleanup) + statistics.register("visualization time",function() if 
enabled then - cleanup() -- in case we don't don't do it each time + -- cleanup() -- in case we don't don't do it each time return format("%s seconds",statistics.elapsedtime(visualizers)) end end) diff --git a/tex/context/base/type-imp-ebgaramond.mkiv b/tex/context/base/type-imp-ebgaramond.mkiv new file mode 100644 index 000000000..838654d49 --- /dev/null +++ b/tex/context/base/type-imp-ebgaramond.mkiv @@ -0,0 +1,45 @@ +%D \module +%D [ file=type-imp-ebgaramond, +%D version=2013.06.22, +%D title=\CONTEXT\ Typescript Macros, +%D subtitle=EB Garamond, +%D author=Hans Hagen, +%D date=\currentdate, +%D copyright={PRAGMA ADE \& \CONTEXT\ Development Team}] +%C +%C This module is part of the \CONTEXT\ macro||package and is +%C therefore copyrighted by \PRAGMA. See mreadme.pdf for +%C details. + +\definefontfeature + [eb-garamond-normal] + [default] + [mode=node,ccmp=yes,calt=yes, + liga=yes,dlig=yes,hlig=yes, + kern=yes,mark=yes,mkmk=yes, + onum=yes,pnum=yes,salt=yes, + script=latn] + +\definefontfeature + [eb-garamond-smallcaps] + [eb-garamond-normal] + [smcp=yes,c2sc=yes] + +\starttypescriptcollection[ebgaramond] + + \starttypescript [serif] [ebgaramond] + \loadfontgoodies[ebgaramond] + \setups[font:fallback:serif] + \definefontsynonym [Serif] [file:ebgaramond-regular] [features=eb-garamond-normal] + \definefontsynonym [SerifItalic] [file:ebgaramond-italic] [features=eb-garamond-normal] + \definefontsynonym [SerifBold] [file:ebgaramond-bold] [features=eb-garamond-normal] + \definefontsynonym [SerifCaps] [Serif] [features=eb-garamond-smallcaps] + \stoptypescript + + \starttypescript[ebgaramond] + \definetypeface [ebgaramond] [rm] [serif] [ebgaramond] [default] [designsize=auto] + \definetypeface [ebgaramond] [tt] [mono] [dejavu] [default] + \definetypeface [ebgaramond] [mm] [math] [bonum] [default] + \stoptypescript + +\stoptypescriptcollection diff --git a/tex/context/base/type-imp-latinmodern.mkiv b/tex/context/base/type-imp-latinmodern.mkiv index afe2c6417..fe4b669bd 
100644 --- a/tex/context/base/type-imp-latinmodern.mkiv +++ b/tex/context/base/type-imp-latinmodern.mkiv @@ -71,11 +71,14 @@ \starttypescript [\s!math] [modern,latin-modern-designsize,latin-modern] [\s!name] \loadfontgoodies[lm] - \loadfontgoodies[lm-math] \definefontsynonym [\s!MathRoman] [LMMathRoman-Regular] \definefontsynonym [\s!MathRomanBold] [LMMathRoman-Bold] \stoptypescript + \starttypescript [\s!math] [latin-modern-designsize] [\s!name] + \loadfontgoodies[lm-math] + \stoptypescript + \starttypescript [\s!serif] [modern-variable,latin-modern-variable-designsize,latin-modern-variable] [\s!name] \loadfontgoodies[lm] \definefontsynonym [\s!Serif] [LMTypewriterVarWd-Regular] [\s!features=\s!default] diff --git a/tex/context/base/type-imp-texgyre.mkiv b/tex/context/base/type-imp-texgyre.mkiv index 24185f41d..b2aaa3629 100644 --- a/tex/context/base/type-imp-texgyre.mkiv +++ b/tex/context/base/type-imp-texgyre.mkiv @@ -153,7 +153,7 @@ \definetypeface [\typescriptone] [\s!rm] [\s!serif] [\typescriptone] [\s!default] \definetypeface [\typescriptone] [\s!ss] [\s!sans] [helvetica] [\s!default] [\s!rscale=0.9] \definetypeface [\typescriptone] [\s!tt] [\s!mono] [modern] [\s!default] [\s!rscale=1.05] - \definetypeface [\typescriptone] [\s!mm] [\s!math] [times] [\s!default] + \definetypeface [\typescriptone] [\s!mm] [\s!math] [termes] [\s!default] \quittypescriptscanning \stoptypescript @@ -161,7 +161,7 @@ \definetypeface [\typescriptone] [\s!rm] [\s!serif] [\typescriptone] [\s!default] \definetypeface [\typescriptone] [\s!ss] [\s!sans] [modern] [\s!default] [\s!rscale=1.075] \definetypeface [\typescriptone] [\s!tt] [\s!mono] [modern] [\s!default] [\s!rscale=1.075] - \definetypeface [\typescriptone] [\s!mm] [\s!math] [palatino] [\s!default] + \definetypeface [\typescriptone] [\s!mm] [\s!math] [pagella] [\s!default] \quittypescriptscanning \stoptypescript @@ -169,7 +169,7 @@ \definetypeface [\typescriptone] [\s!rm] [\s!serif] [\typescriptone] [\s!default] \definetypeface 
[\typescriptone] [\s!ss] [\s!sans] [modern] [\s!default] [\s!rscale=1.1] \definetypeface [\typescriptone] [\s!tt] [\s!mono] [modern] [\s!default] [\s!rscale=1.1] - \definetypeface [\typescriptone] [\s!mm] [\s!math] [modern] [\s!default] [\s!rscale=1.1] + \definetypeface [\typescriptone] [\s!mm] [\s!math] [schola] [\s!default] [\s!rscale=1.1] \quittypescriptscanning \stoptypescript @@ -277,3 +277,12 @@ \stoptypescript \stoptypescriptcollection + +\starttypescriptcollection[texgyre-math-schola] + + \starttypescript [\s!math][schoolbook,schola][\s!all] + \loadfontgoodies[texgyre] + \definefontsynonym[\s!MathRoman][file:texgyre-schola-math-regular.otf][\s!features=\s!math\mathsizesuffix] + \stoptypescript + +\stoptypescriptcollection diff --git a/tex/context/base/type-ini.mkvi b/tex/context/base/type-ini.mkvi index a4d576d80..faa9c667c 100644 --- a/tex/context/base/type-ini.mkvi +++ b/tex/context/base/type-ini.mkvi @@ -299,7 +299,7 @@ \let\typescripttwo \m_font_typescripts_two \let\typescriptthree\m_font_typescripts_three \let\m_font_typescripts_match\empty - \doifnextoptionalelse\font_typescripts_start_process_one\font_typescripts_start_process_all} + \doifnextoptionalcselse\font_typescripts_start_process_one\font_typescripts_start_process_all} \def\font_typescripts_start_process_all % could be a \let {\ifconditional\c_font_typescripts_first_pass @@ -333,10 +333,10 @@ {\font_typescripts_check\m_font_typescripts_three\typescriptthree\font_typescripts_start_process_again_three} \def\font_typescripts_start_process_again_one - {\doifnextoptionalelse\font_typescripts_start_process_two\font_typescripts_start_process_yes} + {\doifnextoptionalcselse\font_typescripts_start_process_two\font_typescripts_start_process_yes} \def\font_typescripts_start_process_again_two - {\doifnextoptionalelse\font_typescripts_start_process_three\font_typescripts_start_process_yes} + {\doifnextoptionalcselse\font_typescripts_start_process_three\font_typescripts_start_process_yes} 
\let\font_typescripts_start_process_again_three\font_typescripts_start_process_yes @@ -389,9 +389,9 @@ \unexpanded\def\forgetmapfiles {\ctxlua{fonts.mappings.reset()}} -\prependtoks - \loadmapfile[mkiv-base.map]% can't we preload this one? -\to \everystarttext +% \prependtoks +% \loadmapfile[mkiv-base.map]% can't we preload this one? +% \to \everystarttext %D A handy shortcut: diff --git a/tex/context/base/typo-brk.lua b/tex/context/base/typo-brk.lua index be11da9c3..f9a65c6ba 100644 --- a/tex/context/base/typo-brk.lua +++ b/tex/context/base/typo-brk.lua @@ -32,9 +32,8 @@ local getchar = nuts.getchar local getfont = nuts.getfont local getid = nuts.getid local getfield = nuts.getfield -local getattr = nuts.getattr - local setfield = nuts.setfield +local getattr = nuts.getattr local setattr = nuts.setattr local copy_node = nuts.copy @@ -108,7 +107,7 @@ methods[2] = function(head,start) -- ( => (- local tmp head, start, tmp = remove_node(head,start) head, start = insert_node_before(head,start,new_disc()) - setfield(start,"attr",copy_nodelist(getfield(tmp,"attr"))) + setfield(start,"attr",copy_nodelist(getfield(tmp,"attr"))) -- just a copy will do setfield(start,"replace",tmp) local tmp = copy_node(tmp) local hyphen = copy_node(tmp) @@ -126,7 +125,7 @@ methods[3] = function(head,start) -- ) => -) local tmp head, start, tmp = remove_node(head,start) head, start = insert_node_before(head,start,new_disc()) - setfield(start,"attr",copy_nodelist(getfield(tmp,"attr"))) + setfield(start,"attr",copy_nodelist(getfield(tmp,"attr"))) -- just a copy will do setfield(start,"replace",tmp) local tmp = copy_node(tmp) local hyphen = copy_node(tmp) @@ -144,7 +143,7 @@ methods[4] = function(head,start) -- - => - - - local tmp head, start, tmp = remove_node(head,start) head, start = insert_node_before(head,start,new_disc()) - setfield(start,"attr",copy_nodelist(getfield(tmp,"attr"))) + setfield(start,"attr",copy_nodelist(getfield(tmp,"attr"))) -- just a copy will do 
setfield(start,"pre",copy_node(tmp)) setfield(start,"post",copy_node(tmp)) setfield(start,"replace",tmp) @@ -172,7 +171,7 @@ methods[5] = function(head,start,settings) -- x => p q r if middle then setfield(start,"replace",(tonodes(tostring(middle),font,attr))) end - setfield(start,"attr",copy_nodelist(attr)) -- todo: critical only + setfield(start,"attr",copy_nodelist(attr)) -- todo: critical only -- just a copy will do free_node(tmp) insert_break(head,start,10000,10000) end diff --git a/tex/context/base/typo-cln.lua b/tex/context/base/typo-cln.lua index b7e337662..8b1ac7876 100644 --- a/tex/context/base/typo-cln.lua +++ b/tex/context/base/typo-cln.lua @@ -34,6 +34,7 @@ local tonut = nuts.tonut local setfield = nuts.setfield local getchar = nuts.getchar local getattr = nuts.getattr +local setattr = nuts.setattr local traverse_id = nuts.traverse_id diff --git a/tex/context/base/typo-dha.lua b/tex/context/base/typo-dha.lua index 15e345ff8..3410c2dfc 100644 --- a/tex/context/base/typo-dha.lua +++ b/tex/context/base/typo-dha.lua @@ -65,13 +65,14 @@ local getfield = nuts.getfield local setfield = nuts.setfield local getattr = nuts.getattr local setattr = nuts.setattr +local getprop = nuts.getprop +local setprop = nuts.setprop local insert_node_before = nuts.insert_before local insert_node_after = nuts.insert_after local remove_node = nuts.remove local end_of_math = nuts.end_of_math - local nodepool = nuts.pool local nodecodes = nodes.nodecodes @@ -240,7 +241,7 @@ local function process(start) end elseif lro or override < 0 then if direction == "r" or direction == "al" then - setattr(current,a_state,s_isol) + setprop(current,a_state,s_isol) direction = "l" reversed = true end diff --git a/tex/context/base/typo-dir.lua b/tex/context/base/typo-dir.lua index fbca0f024..e7d3c686c 100644 --- a/tex/context/base/typo-dir.lua +++ b/tex/context/base/typo-dir.lua @@ -33,99 +33,41 @@ local formatters = string.formatters local nodes, node = nodes, node -local trace_textdirections = 
false trackers.register("typesetters.directions.text", function(v) trace_textdirections = v end) -local trace_mathdirections = false trackers.register("typesetters.directions.math", function(v) trace_mathdirections = v end) -local trace_directions = false trackers.register("typesetters.directions", function(v) trace_textdirections = v trace_mathdirections = v end) +local trace_textdirections = false trackers.register("typesetters.directions.text", function(v) trace_textdirections = v end) +local trace_mathdirections = false trackers.register("typesetters.directions.math", function(v) trace_mathdirections = v end) +local trace_directions = false trackers.register("typesetters.directions", function(v) trace_textdirections = v trace_mathdirections = v end) local report_textdirections = logs.reporter("typesetting","text directions") local report_mathdirections = logs.reporter("typesetting","math directions") -local nuts = nodes.nuts -local tonut = nuts.tonut -local tonode = nuts.tonode -local nutstring = nuts.tostring - -local getnext = nuts.getnext -local getprev = nuts.getprev -local getfont = nuts.getfont -local getchar = nuts.getchar -local getid = nuts.getid -local getsubtype = nuts.getsubtype -local getlist = nuts.getlist -local getfield = nuts.getfield -local setfield = nuts.setfield -local getattr = nuts.getattr -local setattr = nuts.setattr - -local hasbit = number.hasbit - -local traverse_id = nuts.traverse_id -local insert_node_before = nuts.insert_before -local insert_node_after = nuts.insert_after -local remove_node = nuts.remove -local end_of_math = nuts.end_of_math - -local texsetattribute = tex.setattribute -local texsetcount = tex.setcount -local unsetvalue = attributes.unsetvalue - -local nodecodes = nodes.nodecodes -local whatcodes = nodes.whatcodes -local mathcodes = nodes.mathcodes - -local tasks = nodes.tasks -local tracers = nodes.tracers -local setcolor = tracers.colors.set -local resetcolor = tracers.colors.reset - -local glyph_code = 
nodecodes.glyph -local whatsit_code = nodecodes.whatsit -local math_code = nodecodes.math -local penalty_code = nodecodes.penalty -local kern_code = nodecodes.kern -local glue_code = nodecodes.glue -local hlist_code = nodecodes.hlist -local vlist_code = nodecodes.vlist - -local localpar_code = whatcodes.localpar -local dir_code = whatcodes.dir - -local nodepool = nuts.pool - -local new_textdir = nodepool.textdir - -local fonthashes = fonts.hashes -local fontdata = fonthashes.identifiers -local fontchar = fonthashes.characters - -local chardirections = characters.directions -local charmirrors = characters.mirrors -local charclasses = characters.textclasses - -local directions = typesetters.directions or { } -typesetters.directions = directions - -local a_state = attributes.private('state') -local a_directions = attributes.private('directions') -local a_mathbidi = attributes.private('mathbidi') - -local strip = false - -local s_isol = fonts.analyzers.states.isol - -local variables = interfaces.variables -local v_global = variables["global"] -local v_local = variables["local"] -local v_on = variables.on -local v_yes = variables.yes - -local m_enabled = 2^6 -- 64 -local m_global = 2^7 -local m_fences = 2^8 - -local handlers = { } -local methods = { } -local lastmethod = 0 +local hasbit = number.hasbit + +local texsetattribute = tex.setattribute +local unsetvalue = attributes.unsetvalue + +local tasks = nodes.tasks +local tracers = nodes.tracers +local setcolor = tracers.colors.set +local resetcolor = tracers.colors.reset + +local directions = typesetters.directions or { } +typesetters.directions = directions + +local a_directions = attributes.private('directions') + +local variables = interfaces.variables +local v_global = variables["global"] +local v_local = variables["local"] +local v_on = variables.on +local v_yes = variables.yes + +local m_enabled = 2^6 -- 64 +local m_global = 2^7 +local m_fences = 2^8 + +local handlers = { } +local methods = { } +local lastmethod 
= 0 local function installhandler(name,handler) local method = methods[name] diff --git a/tex/context/base/typo-drp.lua b/tex/context/base/typo-drp.lua index 3a87d94b3..9151100b6 100644 --- a/tex/context/base/typo-drp.lua +++ b/tex/context/base/typo-drp.lua @@ -32,9 +32,8 @@ local getchar = nuts.getchar local getid = nuts.getid local getsubtype = nuts.getsubtype local getfield = nuts.getfield -local getattr = nuts.getattr - local setfield = nuts.setfield +local getattr = nuts.getattr local setattr = nuts.setattr local hpack_nodes = nuts.hpack diff --git a/tex/context/base/typo-dua.lua b/tex/context/base/typo-dua.lua index 91a27a30e..73b00f033 100644 --- a/tex/context/base/typo-dua.lua +++ b/tex/context/base/typo-dua.lua @@ -80,6 +80,7 @@ local getfield = nuts.getfield local setfield = nuts.setfield local remove_node = nuts.remove +local copy_node = nuts.copy local insert_node_after = nuts.insert_after local insert_node_before = nuts.insert_before @@ -106,7 +107,7 @@ local maximum_stack = 60 -- probably spec but not needed local directions = typesetters.directions local setcolor = directions.setcolor -local a_directions = attributes.private('directions') +----- a_directions = attributes.private('directions') local remove_controls = true directives.register("typesetters.directions.one.removecontrols",function(v) remove_controls = v end) @@ -708,20 +709,26 @@ local function apply_to_list(list,size,head,pardir) elseif id == glue_code then if enddir and getsubtype(current) == parfillskip_code then -- insert the last enddir before \parfillskip glue - head = insert_node_before(head,current,new_textdir(enddir)) + local d = new_textdir(enddir) +-- setfield(d,"attr",copy_node(getfield(current,"attr"))) + head = insert_node_before(head,current,d) enddir = false done = true end elseif id == whatsit_code then if begindir and getsubtype(current) == localpar_code then -- local_par should always be the 1st node - head, current = 
insert_node_after(head,current,new_textdir(begindir)) + local d = new_textdir(begindir) +-- setfield(d,"attr",copy_node(getfield(current,"attr"))) + head, current = insert_node_after(head,current,d) begindir = nil done = true end end if begindir then - head = insert_node_before(head,current,new_textdir(begindir)) + local d = new_textdir(begindir) +-- setfield(d,"attr",copy_node(getfield(current,"attr"))) + head = insert_node_before(head,current,d) done = true end local skip = entry.skip @@ -731,7 +738,9 @@ local function apply_to_list(list,size,head,pardir) end end if enddir then - head, current = insert_node_after(head,current,new_textdir(enddir)) + local d = new_textdir(enddir) +-- setfield(d,"attr",copy_node(getfield(current,"attr"))) + head, current = insert_node_after(head,current,d) done = true end if not entry.remove then diff --git a/tex/context/base/typo-dub.lua b/tex/context/base/typo-dub.lua index 4dc0f21fb..b6581137b 100644 --- a/tex/context/base/typo-dub.lua +++ b/tex/context/base/typo-dub.lua @@ -65,10 +65,12 @@ local getid = nuts.getid local getsubtype = nuts.getsubtype local getlist = nuts.getlist local getattr = nuts.getattr +local setattr = nuts.setattr local getfield = nuts.getfield local setfield = nuts.setfield local remove_node = nuts.remove +local copy_node = nuts.copy local insert_node_after = nuts.insert_after local insert_node_before = nuts.insert_before @@ -97,11 +99,11 @@ local getfences = directions.getfences local a_directions = attributes.private('directions') local a_textbidi = attributes.private('textbidi') -local a_state = attributes.private('state') +----- a_state = attributes.private('state') -local s_isol = fonts.analyzers.states.isol +----- s_isol = fonts.analyzers.states.isol --- current[a_state] = s_isol -- maybe better have a special bidi attr value -> override (9) -> todo +----- current[a_state] = s_isol -- maybe better have a special bidi attr value -> override (9) -> todo local remove_controls = true 
directives.register("typesetters.directions.removecontrols",function(v) remove_controls = v end) ----- analyze_fences = true directives.register("typesetters.directions.analyzefences", function(v) analyze_fences = v end) @@ -817,20 +819,26 @@ local function apply_to_list(list,size,head,pardir) elseif id == glue_code then if enddir and getsubtype(current) == parfillskip_code then -- insert the last enddir before \parfillskip glue - head = insert_node_before(head,current,new_textdir(enddir)) + local d = new_textdir(enddir) +-- setfield(d,"attr",copy_node(getfield(current,"attr"))) + head = insert_node_before(head,current,d) enddir = false done = true end elseif id == whatsit_code then if begindir and getsubtype(current) == localpar_code then -- local_par should always be the 1st node - head, current = insert_node_after(head,current,new_textdir(begindir)) + local d = new_textdir(begindir) +-- setfield(d,"attr",copy_node(getfield(current,"attr"))) + head, current = insert_node_after(head,current,d) begindir = nil done = true end end if begindir then - head = insert_node_before(head,current,new_textdir(begindir)) + local d = new_textdir(begindir) +-- setfield(d,"attr",copy_node(getfield(current,"attr"))) + head = insert_node_before(head,current,d) done = true end local skip = entry.skip @@ -840,7 +848,9 @@ local function apply_to_list(list,size,head,pardir) end end if enddir then - head, current = insert_node_after(head,current,new_textdir(enddir)) + local d = new_textdir(enddir) +-- setfield(d,"attr",copy_node(getfield(current,"attr"))) + head, current = insert_node_after(head,current,d) done = true end if not entry.remove then diff --git a/tex/context/base/typo-fln.lua b/tex/context/base/typo-fln.lua index 7ce41cd81..884a4c829 100644 --- a/tex/context/base/typo-fln.lua +++ b/tex/context/base/typo-fln.lua @@ -30,12 +30,11 @@ local tonode = nuts.tonode local getnext = nuts.getnext local getid = nuts.getid local getfield = nuts.getfield +local setfield = nuts.setfield 
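Several hunks above wire runtime switches through `directives.register` (for example `typesetters.directions.removecontrols` and `typesetters.directions.analyzefences`): the module keeps a fast local flag, and only a setter closure is handed to the registry. A minimal plain-Lua sketch of that pattern — the registry table and function names below are illustrative, not the actual ConTeXt API:

```lua
-- a tiny directives-style registry: a directive name maps to a setter
-- closure, so external code can flip a module-local flag without the
-- module ever exposing the local itself
local directives = { registered = { } }

function directives.register(name, setter)
    directives.registered[name] = setter
end

function directives.enable(name, value)
    local setter = directives.registered[name]
    if setter then
        setter(value)
    end
end

-- module side: the hot code path reads a plain local, which is cheaper
-- than a table lookup on every call
local remove_controls = true
directives.register("removecontrols", function(v) remove_controls = v end)

-- configuration side: flip the switch by name
directives.enable("removecontrols", false)
```

This is why the flag sits in a `local` right next to its `register` call in the hunks above: the closure is the only writer, and the readers pay no table-lookup cost.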
local getlist = nuts.getlist local getattr = nuts.getattr -local getbox = nuts.getbox - -local setfield = nuts.setfield local setattr = nuts.setattr +local getbox = nuts.getbox local nodecodes = nodes.nodecodes local glyph_code = nodecodes.glyph diff --git a/tex/context/base/typo-fln.mkiv b/tex/context/base/typo-fln.mkiv index d8651b459..c092fc922 100644 --- a/tex/context/base/typo-fln.mkiv +++ b/tex/context/base/typo-fln.mkiv @@ -79,7 +79,7 @@ \begingroup \edef\currentfirstline{#1}% \usefirstlinestyleandcolor\c!style\c!color - \ctxlua{commands.setfirstline { + \ctxcommand{setfirstline { alternative = "\firstlineparameter\c!alternative", ma = \the\attribute\colormodelattribute, ca = \the\attribute\colorattribute, diff --git a/tex/context/base/typo-itc.lua b/tex/context/base/typo-itc.lua index db94c5c54..7373c0321 100644 --- a/tex/context/base/typo-itc.lua +++ b/tex/context/base/typo-itc.lua @@ -37,6 +37,7 @@ local getid = nuts.getid local getfont = nuts.getfont local getchar = nuts.getchar local getattr = nuts.getattr +local setattr = nuts.setattr local insert_node_after = nuts.insert_after local delete_node = nuts.delete diff --git a/tex/context/base/typo-krn.mkiv b/tex/context/base/typo-krn.mkiv index 3522c02fc..92689f07b 100644 --- a/tex/context/base/typo-krn.mkiv +++ b/tex/context/base/typo-krn.mkiv @@ -70,7 +70,7 @@ % \definecharacterkerning [\v!letterspacing ] [\v!kerncharacters] [\c!features=letterspacing] % % \unexpanded\def\kerncharacters -% {\doifnextoptionalelse\typo_kerning_apply_yes\typo_kerning_apply_nop} +% {\doifnextoptionalcselse\typo_kerning_apply_yes\typo_kerning_apply_nop} % % \def\typo_kerning_apply_yes[#1]% % {\groupedcommand{\typo_kerning_apply_yes_indeed{#1}}\donothing} diff --git a/tex/context/base/typo-mar.lua b/tex/context/base/typo-mar.lua index 4bfc107ad..4ea6b1e1d 100644 --- a/tex/context/base/typo-mar.lua +++ b/tex/context/base/typo-mar.lua @@ -76,6 +76,7 @@ if not modules then modules = { } end modules ['typo-mar'] = { local format, 
validstring = string.format, string.valid local insert, remove = table.insert, table.remove local setmetatable, next = setmetatable, next +local formatters = string.formatters local attributes, nodes, node, variables = attributes, nodes, node, variables @@ -170,6 +171,8 @@ local new_stretch = nodepool.stretch local new_usernumber = nodepool.usernumber local new_latelua = nodepool.latelua +local lateluafunction = nodepool.lateluafunction + local texgetcount = tex.getcount local texgetdimen = tex.getdimen local texget = tex.get @@ -179,13 +182,15 @@ local points = number.points local isleftpage = layouts.status.isleftpage local registertogether = builders.paragraphs.registertogether -- tonode -local jobpositions = job.positions -local getposition = jobpositions.position - local a_margindata = attributes.private("margindata") local inline_mark = nodepool.userids["margins.inline"] +local jobpositions = job.positions +local getposition = jobpositions.get +local setposition = jobpositions.set +local getreserved = jobpositions.getreserved + local margins = { } typesetters.margins = margins @@ -368,6 +373,16 @@ end local status, nofstatus = { }, 0 +local f_anchor = formatters["_plib_.set('md:h',%i,{x=true,c=true})"] +local function setanchor(h_anchor) + return new_latelua(f_anchor(h_anchor)) +end + +-- local t_anchor = { x = true, c = true } +-- local function setanchor(h_anchor) +-- return lateluafunction(function() setposition("md:h",h_anchor,t_anchor) end) +-- end + local function realign(current,candidate) local location = candidate.location local margin = candidate.margin @@ -436,10 +451,10 @@ local function realign(current,candidate) if inline or anchor ~= v_text or candidate.psubtype == alignment_code then -- the alignment_code check catches margintexts ste before a tabulate h_anchors = h_anchors + 1 - anchornode = new_latelua(format("_plib_.set('md:h',%i,{x=true,c=true})",h_anchors)) - local blob = jobpositions.get('md:h', h_anchors) + anchornode = 
setanchor(h_anchors) + local blob = getposition('md:h',h_anchors) if blob then - local reference = jobpositions.getreserved(anchor,blob.c) + local reference = getreserved(anchor,blob.c) if reference then if location == v_left then move_x = (reference.x or 0) - (blob.x or 0) @@ -494,25 +509,36 @@ end -- resetstacked() -function margins.ha(tag) -- maybe l/r keys ipv left/right keys +local function ha(tag) -- maybe l/r keys ipv left/right keys local p = cache[tag] p.p = true p.y = true - jobpositions.set('md:v',tag,p) + setposition('md:v',tag,p) cache[tag] = nil end -local function markovershoot(current) +margins.ha = ha + +local f_anchor = formatters["typesetters.margins.ha(%s)"] +local function setanchor(v_anchor) + return new_latelua(f_anchor(v_anchor)) +end + +-- local function setanchor(v_anchor) -- freezes the global here +-- return lateluafunction(function() ha(v_anchor) end) +-- end + +local function markovershoot(current) -- todo: alleen als offset > line v_anchors = v_anchors + 1 cache[v_anchors] = stacked - local anchor = new_latelua(format("typesetters.margins.ha(%s)",v_anchors)) -- todo: alleen als offset > line + local anchor = setanchor(v_anchors) local list = hpack_nodes(linked_nodes(anchor,getlist(current))) setfield(current,"list",list) end local function getovershoot(location) - local p = jobpositions.get("md:v",v_anchors) - local c = jobpositions.get("md:v",v_anchors+1) + local p = getposition("md:v",v_anchors) + local c = getposition("md:v",v_anchors+1) if p and c and p.p and p.p == c.p then local distance = p.y - c.y local offset = p[location] or 0 @@ -901,3 +927,5 @@ statistics.register("margin data", function() return nil end end) + +commands.savemargindata = margins.save diff --git a/tex/context/base/typo-mar.mkiv b/tex/context/base/typo-mar.mkiv index 595cf3756..2b89f5777 100644 --- a/tex/context/base/typo-mar.mkiv +++ b/tex/context/base/typo-mar.mkiv @@ -258,11 +258,13 @@ \fi \ifdone \anch_positions_initialize % we use positions at the lua 
end - \ctxlua{typesetters.margins.save{ + \ctxcommand{savemargindata{ location = "\margindataparameter\c!location", method = "\margindataparameter\c!method", category = "\margindataparameter\c!category", name = "\margindataparameter\c!name", + scope = "\margindataparameter\c!scope", + number = \number\nextbox, margin = "\margindataparameter\c!margin", % local normal margin edge distance = \number\dimexpr\margindataparameter\c!distance, hoffset = \number\dimexpr\margindataparameter\c!hoffset, @@ -286,14 +288,12 @@ % \ifzeropt\leftskip \else % rightskip = \number\rightskip, % \fi - scope = "\margindataparameter\c!scope", align = "\margindataparameter\c!align", line = \number\margindataparameter\c!line, stack = "\margindataparameter\c!stack", - number = \number\nextbox, }}% \else - \ctxlua{typesetters.margins.save{ + \ctxcommand{savemargindata{ location = "\margindataparameter\c!location", method = "\margindataparameter\c!method", category = "\margindataparameter\c!category", diff --git a/tex/context/base/typo-prc.lua b/tex/context/base/typo-prc.lua index a6c27ede6..959cabbb8 100644 --- a/tex/context/base/typo-prc.lua +++ b/tex/context/base/typo-prc.lua @@ -14,13 +14,16 @@ local lpegmatch, patterns, P, C, Cs = lpeg.match, lpeg.patterns, lpeg.P, lpeg.C, -- processors: syntax: processor->data ... 
not ok yet -typesetters.processors = typesetters.processors or { } -local processors = typesetters.processors +typesetters.processors = typesetters.processors or { } +local processors = typesetters.processors local trace_processors = false local report_processors = logs.reporter("processors") local registered = { } +context_applyprocessor = context.applyprocessor +context_firstofoneargument = context.firstofoneargument + trackers.register("typesetters.processors", function(v) trace_processors = v end) function processors.register(p) @@ -55,7 +58,7 @@ function processors.apply(p,s) if trace_processors then report_processors("applying %s processor %a, argument: %s","known",p,s) end - context.applyprocessor(p,s) + context_applyprocessor(p,s) elseif s then if trace_processors then report_processors("applying %s processor %a, argument: %s","unknown",p,s) @@ -78,21 +81,21 @@ function processors.startapply(p,s) if trace_processors then report_processors("start applying %s processor %a","known",p) end - context.applyprocessor(p) + context_applyprocessor(p) context("{") return s elseif p then if trace_processors then report_processors("start applying %s processor %a","unknown",p) end - context.firstofoneargument() + context_firstofoneargument() context("{") return s else if trace_processors then report_processors("start applying %s processor","ignored") end - context.firstofoneargument() + context_firstofoneargument() context("{") return str end diff --git a/tex/context/base/typo-rep.lua b/tex/context/base/typo-rep.lua index 95b801e2e..15e3f9746 100644 --- a/tex/context/base/typo-rep.lua +++ b/tex/context/base/typo-rep.lua @@ -27,8 +27,8 @@ local tonode = nuts.tonode local getnext = nuts.getnext local getchar = nuts.getchar local getid = nuts.getid -local getattr = nuts.getid +local getattr = nuts.getattr local setattr = nuts.setattr local delete_node = nuts.delete diff --git a/tex/context/base/typo-spa.lua b/tex/context/base/typo-spa.lua index 5cf9ab837..eb84eb7d7 100644 
--- a/tex/context/base/typo-spa.lua +++ b/tex/context/base/typo-spa.lua @@ -36,7 +36,6 @@ local getchar = nuts.getchar local getid = nuts.getid local getfont = nuts.getfont local getattr = nuts.getattr - local setattr = nuts.setattr local insert_node_before = nuts.insert_before diff --git a/tex/context/base/typo-tal.lua b/tex/context/base/typo-tal.lua index debcedfd3..3a2d80e51 100644 --- a/tex/context/base/typo-tal.lua +++ b/tex/context/base/typo-tal.lua @@ -29,11 +29,11 @@ local getprev = nuts.getprev local getid = nuts.getid local getfont = nuts.getfont local getchar = nuts.getchar -local getattr = nuts.getattr local getfield = nuts.getfield +local setfield = nuts.setfield +local getattr = nuts.getattr local setattr = nuts.setattr -local setfield = nuts.setfield local insert_node_before = nuts.insert_before local insert_node_after = nuts.insert_after diff --git a/tex/context/base/typo-txt.mkvi b/tex/context/base/typo-txt.mkvi index 57f4e5f42..fa79a4f6b 100644 --- a/tex/context/base/typo-txt.mkvi +++ b/tex/context/base/typo-txt.mkvi @@ -17,7 +17,7 @@ \unprotect -\registerctxluafile{typo-txt}{1.001} +% registerctxluafile{typo-txt}{1.001} %D \macros %D {normalizefontheight,normalizefontwidth,normalizedfontsize} diff --git a/tex/context/base/util-env.lua b/tex/context/base/util-env.lua index 0a708ea43..e96a464b0 100644 --- a/tex/context/base/util-env.lua +++ b/tex/context/base/util-env.lua @@ -197,7 +197,7 @@ function environment.reconstructcommandline(arg,noquote) a = resolvers.resolve(a) a = unquoted(a) a = gsub(a,'"','\\"') -- tricky - if find(a," ") then + if find(a," ",1,true) then result[#result+1] = quoted(a) else result[#result+1] = a diff --git a/tex/context/base/util-prs.lua b/tex/context/base/util-prs.lua index e5b35a727..2cede919b 100644 --- a/tex/context/base/util-prs.lua +++ b/tex/context/base/util-prs.lua @@ -179,12 +179,12 @@ function parsers.settings_to_array(str,strict) elseif not str or str == "" then return { } elseif strict then - if 
find(str,"{") then
+        if find(str,"{",1,true) then
             return lpegmatch(pattern,str)
         else
             return { str }
         end
-    elseif find(str,",") then
+    elseif find(str,",",1,true) then
         return lpegmatch(pattern,str)
     else
         return { str }
     end
 end
diff --git a/tex/context/base/util-sci.lua b/tex/context/base/util-sci.lua
new file mode 100644
index 000000000..98b05fe75
--- /dev/null
+++ b/tex/context/base/util-sci.lua
@@ -0,0 +1,262 @@
+local gsub, sub, find = string.gsub, string.sub, string.find
+local concat = table.concat
+local formatters = string.formatters
+local lpegmatch = lpeg.match
+local setmetatableindex = table.setmetatableindex
+
+local scite = scite or { }
+utilities.scite = scite
+
+local report = logs.reporter("scite")
+
+local lexerroot = file.dirname(resolvers.find_file("scite-context-lexer.lua"))
+
+local knownlexers = {
+    tex = "tex", mkiv = "tex", mkvi = "tex", mkxi = "tex", mkix = "tex", mkii = "tex", cld = "tex",
+    lua = "lua", lfg = "lua", lus = "lua",
+    w = "web", ww = "web",
+    c = "cpp", h = "cpp", cpp = "cpp", hpp = "cpp", cxx = "cpp", hxx = "cpp",
+    xml = "xml", lmx = "xml", ctx = "xml", xsl = "xml", xsd = "xml", rlx = "xml", css = "xml", dtd = "xml",
+    bib = "bibtex",
+    rme = "txt",
+    -- todo: pat/hyp ori
+}
+
+lexer = nil -- main lexer, global (for the moment needed for themes)
+
+local function loadscitelexer()
+    if not lexer then
+        dir.push(lexerroot)
+        lexer = dofile("scite-context-lexer.lua")
+        dofile("themes/scite-context-theme.lua")
+        dir.pop()
+    end
+    return lexer
+end
+
+local loadedlexers = setmetatableindex(function(t,k)
+    local l = knownlexers[k] or k
+    dir.push(lexerroot)
+    loadscitelexer()
+    local v = lexer.load(formatters["scite-context-lexer-%s"](l))
+    dir.pop()
+    t[l] = v
+    t[k] = v
+    return v
+end)
+
+scite.loadedlexers = loadedlexers
+scite.knownlexers = knownlexers
+scite.loadscitelexer = loadscitelexer
+
+local f_fore_bold = formatters['.%s { display: inline ; font-weight: bold ; color: #%s%s%s ; }']
+local f_fore_none = formatters['.%s { display: inline ; font-weight: normal ; color: #%s%s%s ; }']
+local f_none_bold = formatters['.%s { display: inline ; font-weight: bold ; }']
+local f_none_none = formatters['.%s { display: inline ; font-weight: normal ; }']
+local f_div_class = formatters['<div class="%s">%s</div>']
+local f_linenumber = formatters['\n<div class="linenumber">%s</div>']
+local f_div_number = formatters['.linenumber { display: inline-block ; font-weight: normal ; width: %sem ; margin-right: 2em ; padding-right: .25em ; text-align: right ; background-color: #C7C7C7 ; }']
+
+local replacer_regular = lpeg.replacer {
+    ["<"] = "&lt;",
+    [">"] = "&gt;",
+    ["&"] = "&amp;",
+}
+
+local linenumber = 0
+
+local replacer_numbered = lpeg.replacer {
+    ["<"] = "&lt;",
+    [">"] = "&gt;",
+    ["&"] = "&amp;",
+    [lpeg.patterns.newline] = function() linenumber = linenumber + 1 return f_linenumber(linenumber) end,
+}
+
+local css = nil
+
+local function exportcsslexing()
+    if not css then
+        loadscitelexer()
+        local function black(f)
+            return (f[1] == f[2]) and (f[2] == f[3]) and (f[3] == '00')
+        end
+        local result, r = { }, 0
+        for k, v in table.sortedhash(lexer.context.styles) do
+            local bold = v.bold
+            local fore = v.fore
+            r = r + 1
+            if fore and not black(fore) then
+                if bold then
+                    result[r] = f_fore_bold(k,fore[1],fore[2],fore[3])
+                else
+                    result[r] = f_fore_none(k,fore[1],fore[2],fore[3])
+                end
+            else
+                if bold then
+                    result[r] = f_none_bold(k)
+                else
+                    result[r] = f_none_none(k)
+                end
+            end
+        end
+        css = concat(result,"\n")
+    end
+    return css
+end
+
+local function exportwhites()
+    return setmetatableindex(function(t,k)
+        local v = find(k,"white") and true or false
+        t[k] = v
+        return v
+    end)
+end
+
+local function exportstyled(lexer,text,numbered)
+    local result = lexer.lex(lexer,text,0)
+    local start = 1
+    local whites = exportwhites()
+    local buffer, b = { "<pre>" }, 1
+    linenumber = 1
+    local replacer = numbered and replacer_numbered or replacer_regular
+    if numbered then
+        b = b + 1
+        buffer[b] = f_linenumber(1)
+    end
+    local n = #result
+    for i=1,n,2 do
+        local ii = i + 1
+        local style = result[i]
+        local position = result[ii]
+        local txt = sub(text,start,position-1)
+        if ii == n then
+            txt = gsub(txt,"[%s]+$","")
+        end
+        txt = lpegmatch(replacer,txt)
+        b = b + 1
+        if whites[style] then
+            buffer[b] = txt
+        else
+            buffer[b] = f_div_class(style,txt)
+        end
+        start = position
+    end
+    buffer[b+1] = "</pre>"
+    buffer = concat(buffer)
+    return buffer
+end
+
+local function exportcsslinenumber()
+    return f_div_number(#tostring(linenumber)/2+1)
+end
+
+local htmlfile = utilities.templates.replacer([[
+<?xml version="1.0"?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
+    <html xmlns="http://www.w3.org/1999/xhtml">
+        <title>context util-sci web page: text</title>
+        <meta http-equiv="content-type" content="text/html; charset=UTF-8"/>
+        <style type="text/css"><!--
+%lexingstyles%
+%numberstyles%
+        --></style>
+        <body>
+%lexedcontent%
+        </body>
+</html>
+]])
+
+function scite.tohtml(data,lexname,numbered)
+    return htmlfile {
+        lexedcontent = exportstyled(loadedlexers[lexname],data or "",numbered), -- before numberstyles
+        lexingstyles = exportcsslexing(),
+        numberstyles = exportcsslinenumber(),
+    }
+end
+
+function scite.filetohtml(filename,lexname,targetname,numbered)
+    io.savedata(targetname or "util-sci.html",scite.tohtml(io.loaddata(filename),lexname or file.suffix(filename),numbered))
+end
+
+function scite.css()
+    return exportcsslexing() .. "\n" .. exportcsslinenumber()
+end
+
+function scite.html(data,lexname,numbered)
+    return exportstyled(loadedlexers[lexname],data or "",numbered)
+end
+
+local f_tree_entry = formatters['<a href="%s" class="dir-entry">%s</a>']
+
+local htmlfile = utilities.templates.replacer([[
+<?xml version="1.0"?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
+    <html xmlns="http://www.w3.org/1999/xhtml">
+        <title>context util-sci web page: text</title>
+        <meta http-equiv="content-type" content="text/html; charset=UTF-8"/>
+        <style type="text/css"><!--
+%styles%
+        --></style>
+        <body>
+            <pre>
+%dirlist%
+            </pre>
+        </body>
+</html>
+]])
+
+function scite.converttree(sourceroot,targetroot,numbered)
+    if lfs.isdir(sourceroot) then
+        statistics.starttiming()
+        local skipped = { }
+        local noffiles = 0
+        dir.makedirs(targetroot)
+        local function scan(sourceroot,targetroot)
+            local tree = { }
+            for name in lfs.dir(sourceroot) do
+                if name ~= "." and name ~= ".." then
+                    local sourcename = file.join(sourceroot,name)
+                    local targetname = file.join(targetroot,name)
+                    local mode = lfs.attributes(sourcename,'mode')
+                    if mode == 'file' then
+                        local filetype = file.suffix(sourcename)
+                        local basename = file.basename(name)
+                        local targetname = file.replacesuffix(targetname,"html")
+                        if knownlexers[filetype] then
+                            report("converting file %a to %a",sourcename,targetname)
+                            scite.filetohtml(sourcename,nil,targetname,numbered)
+                            noffiles = noffiles + 1
+                            tree[#tree+1] = f_tree_entry(file.basename(targetname),basename)
+                        else
+                            skipped[filetype] = true
+                            report("no lexer for %a",sourcename)
+                        end
+                    else
+                        dir.makedirs(targetname)
+                        scan(sourcename,targetname)
+                        tree[#tree+1] = f_tree_entry(file.join(name,"files.html"),name)
+                    end
+                end
+            end
+            report("saving tree in %a",treename)
+            local htmldata = htmlfile {
+                dirlist = concat(tree,"\n"),
+                styles = "",
+            }
+            io.savedata(file.join(targetroot,"files.html"),htmldata)
+        end
+        scan(sourceroot,targetroot)
+        if next(skipped) then
+            report("skipped filetypes: %a",table.concat(table.sortedkeys(skipped)," "))
+        end
+        statistics.stoptiming()
+        report("conversion time for %s files: %s",noffiles,statistics.elapsedtime())
+    end
+end
+
+-- scite.filetohtml("strc-sec.mkiv",nil,"e:/tmp/util-sci.html",true)
+-- scite.filetohtml("syst-aux.mkiv",nil,"e:/tmp/util-sci.html",true)
+
+-- scite.converttree("t:/texmf/tex/context","e:/tmp/html/context",true)
+
+return scite
diff --git a/tex/context/base/util-str.lua b/tex/context/base/util-str.lua
index 4ecaed7d3..52c48badd 100644
--- a/tex/context/base/util-str.lua
+++ b/tex/context/base/util-str.lua
@@ -47,10 +47,12 @@ if not number then number = { } end -- temp hack for luatex-fonts
 local stripper = patterns.stripzeros
 local function points(n)
+    n = tonumber(n)
     return (not n or n == 0) and "0pt" or lpegmatch(stripper,format("%.5fpt",n/65536))
 end
 local function basepoints(n)
+    n = tonumber(n)
     return (not n or n == 0) and "0bp" or
lpegmatch(stripper,format("%.5fbp", n*(7200/7227)/65536)) end @@ -152,17 +154,105 @@ end -- print(strings.tabtospace(t[k])) -- end -function strings.striplong(str) -- strips all leading spaces - str = gsub(str,"^%s*","") - str = gsub(str,"[\n\r]+ *","\n") - return str +-- todo: lpeg + +-- function strings.striplong(str) -- strips all leading spaces +-- str = gsub(str,"^%s*","") +-- str = gsub(str,"[\n\r]+ *","\n") +-- return str +-- end + +local newline = patterns.newline +local endofstring = patterns.endofstring +local whitespace = patterns.whitespace +local spacer = patterns.spacer + +local space = spacer^0 +local nospace = space/"" +local endofline = nospace * newline + +local stripend = (whitespace^1 * endofstring)/"" + +local normalline = (nospace * ((1-space*(newline+endofstring))^1) * nospace) + +local stripempty = endofline^1/"" +local normalempty = endofline^1 +local singleempty = endofline * (endofline^0/"") +local doubleempty = endofline * endofline^-1 * (endofline^0/"") + +local stripstart = stripempty^0 + +local p_prune_normal = Cs ( stripstart * ( stripend + normalline + normalempty )^0 ) +local p_prune_collapse = Cs ( stripstart * ( stripend + normalline + doubleempty )^0 ) +local p_prune_noempty = Cs ( stripstart * ( stripend + normalline + singleempty )^0 ) +local p_retain_normal = Cs ( ( normalline + normalempty )^0 ) +local p_retain_collapse = Cs ( ( normalline + doubleempty )^0 ) +local p_retain_noempty = Cs ( ( normalline + singleempty )^0 ) + +-- function striplines(str,prune,collapse,noempty) +-- if prune then +-- if noempty then +-- return lpegmatch(p_prune_noempty,str) or str +-- elseif collapse then +-- return lpegmatch(p_prune_collapse,str) or str +-- else +-- return lpegmatch(p_prune_normal,str) or str +-- end +-- else +-- if noempty then +-- return lpegmatch(p_retain_noempty,str) or str +-- elseif collapse then +-- return lpegmatch(p_retain_collapse,str) or str +-- else +-- return lpegmatch(p_retain_normal,str) or str +-- end +-- end 
+-- end + +local striplinepatterns = { + ["prune"] = p_prune_normal, + ["prune and collapse"] = p_prune_collapse, -- default + ["prune and no empty"] = p_prune_noempty, + ["retain"] = p_retain_normal, + ["retain and collapse"] = p_retain_collapse, + ["retain and no empty"] = p_retain_noempty, +} + +strings.striplinepatterns = striplinepatterns + +function strings.striplines(str,how) + return str and lpegmatch(how and striplinepatterns[how] or p_prune_collapse,str) or str end --- local template = string.striplong([[ +strings.striplong = strings.striplines -- for old times sake + +-- local str = table.concat( { +-- " ", +-- " aap", +-- " noot mies", +-- " ", +-- " ", +-- " zus wim jet", +-- "zus wim jet", +-- " zus wim jet", +-- " ", +-- }, "\n") + +-- local str = table.concat( { +-- " aaaa", +-- " bb", +-- " cccccc", +-- }, "\n") + +-- for k, v in table.sortedhash(utilities.strings.striplinepatterns) do +-- logs.report("stripper","method: %s, result: [[%s]]",k,utilities.strings.striplines(str,k)) +-- end + +-- inspect(strings.striplong([[ -- aaaa -- bb -- cccccc --- ]]) +-- ]])) function strings.nice(str) str = gsub(str,"[:%-+_]+"," ") -- maybe more @@ -418,7 +508,7 @@ local format_i = function(f) if f and f ~= "" then return format("format('%%%si',a%s)",f,n) else - return format("format('%%i',a%s)",n) + return format("format('%%i',a%s)",n) -- why not just tostring() end end @@ -434,6 +524,11 @@ local format_f = function(f) return format("format('%%%sf',a%s)",f,n) end +local format_F = function(f) + n = n + 1 + return format("((a%s == 0 and '0') or (a%s == 1 and '1') or format('%%%sf',a%s))",n,n,f,n) +end + local format_g = function(f) n = n + 1 return format("format('%%%sg',a%s)",f,n) @@ -707,7 +802,7 @@ local builder = Cs { "start", V("!") -- new + V("s") + V("q") + V("i") + V("d") - + V("f") + V("g") + V("G") + V("e") + V("E") + + V("f") + V("F") + V("g") + V("G") + V("e") + V("E") + V("x") + V("X") + V("o") -- + V("c") @@ -742,6 +837,7 @@ local builder = Cs { 
"start", ["i"] = (prefix_any * P("i")) / format_i, -- %i => regular %i (integer) ["d"] = (prefix_any * P("d")) / format_d, -- %d => regular %d (integer) ["f"] = (prefix_any * P("f")) / format_f, -- %f => regular %f (float) + ["F"] = (prefix_any * P("F")) / format_F, -- %F => regular %f (float) but 0/1 check ["g"] = (prefix_any * P("g")) / format_g, -- %g => regular %g (float) ["G"] = (prefix_any * P("G")) / format_G, -- %G => regular %G (float) ["e"] = (prefix_any * P("e")) / format_e, -- %e => regular %e (float) @@ -816,7 +912,8 @@ local function make(t,str) f = loadstripped(p)() else n = 0 - p = lpegmatch(builder,str,1,"..",t._extensions_) -- after this we know n + -- p = lpegmatch(builder,str,1,"..",t._extensions_) -- after this we know n + p = lpegmatch(builder,str,1,t._connector_,t._extensions_) -- after this we know n if n > 0 then p = format(template,preamble,t._preamble_,arguments[n],p) -- print("builder 2 >",p) @@ -875,22 +972,24 @@ strings.formatters = { } -- table (metatable) in which case we could better keep a count and -- clear that table when a threshold is reached +-- _connector_ is an experiment + if _LUAVERSION < 5.2 then - function strings.formatters.new() - local t = { _extensions_ = { }, _preamble_ = preamble, _environment_ = { }, _type_ = "formatter" } + function strings.formatters.new(noconcat) + local t = { _type_ = "formatter", _connector_ = noconcat and "," or "..", _extensions_ = { }, _preamble_ = preamble, _environment_ = { } } setmetatable(t, { __index = make, __call = use }) return t end else - function strings.formatters.new() + function strings.formatters.new(noconcat) local e = { } -- better make a copy as we can overload for k, v in next, environment do e[k] = v end - local t = { _extensions_ = { }, _preamble_ = "", _environment_ = e, _type_ = "formatter" } + local t = { _type_ = "formatter", _connector_ = noconcat and "," or "..", _extensions_ = { }, _preamble_ = "", _environment_ = e } setmetatable(t, { __index = make, __call = 
use }) return t end diff --git a/tex/context/base/util-tab.lua b/tex/context/base/util-tab.lua index d235520c4..f9e9b318d 100644 --- a/tex/context/base/util-tab.lua +++ b/tex/context/base/util-tab.lua @@ -21,27 +21,29 @@ local utftoeight = utf.toeight local splitter = lpeg.tsplitat(".") -function tables.definetable(target,nofirst,nolast) -- defines undefined tables - local composed, shortcut, t = nil, nil, { } +function utilities.tables.definetable(target,nofirst,nolast) -- defines undefined tables + local composed, t = nil, { } local snippets = lpegmatch(splitter,target) for i=1,#snippets - (nolast and 1 or 0) do local name = snippets[i] if composed then - composed = shortcut .. "." .. name - shortcut = shortcut .. "_" .. name - t[#t+1] = formatters["local %s = %s if not %s then %s = { } %s = %s end"](shortcut,composed,shortcut,shortcut,composed,shortcut) + composed = composed .. "." .. name + t[#t+1] = formatters["if not %s then %s = { } end"](composed,composed) else composed = name - shortcut = name if not nofirst then t[#t+1] = formatters["%s = %s or { }"](composed,composed) end end end - if nolast then - composed = shortcut .. "." .. snippets[#snippets] + if composed then + if nolast then + composed = composed .. "." .. snippets[#snippets] + end + return concat(t,"\n"), composed -- could be shortcut + else + return "", target end - return concat(t,"\n"), composed end -- local t = tables.definedtable("a","b","c","d") @@ -73,7 +75,7 @@ end function tables.migratetable(target,v,root) local t = root or _G - local names = string.split(target,".") + local names = lpegmatch(splitter,target) for i=1,#names-1 do local name = names[i] t[name] = t[name] or { } @@ -493,7 +495,8 @@ end -- The next version is somewhat faster, although in practice one will seldom -- serialize a lot using this one. Often the above variants are more efficient. --- If we would really need this a lot, we could hash q keys. 
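The rewritten `definetable` above drops the `shortcut` bookkeeping and simply emits one guarded assignment per path segment. A standalone sketch of that generation logic, without the `nofirst`/`nolast` options of the real function (and using `string.gmatch` in place of the lpeg splitter):

```lua
-- given a dotted path like "a.b.c", generate Lua source that defines
-- every intermediate table exactly once
local function definetable(target)
    local t, composed = { }, nil
    for name in string.gmatch(target,"[^.]+") do
        if composed then
            composed = composed .. "." .. name
            t[#t+1] = string.format("if not %s then %s = { } end",composed,composed)
        else
            composed = name
            t[#t+1] = string.format("%s = %s or { }",composed,composed)
        end
    end
    return table.concat(t,"\n"), composed
end

-- definetable("a.b.c") produces:
--   a = a or { }
--   if not a.b then a.b = { } end
--   if not a.b.c then a.b.c = { } end
```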
+-- If we would really need this a lot, we could hash q keys, or just not used +-- indented code. -- char-def.lua : 0.53 -> 0.38 -- husayni.tma : 0.28 -> 0.19 diff --git a/tex/context/fonts/texgyre.lfg b/tex/context/fonts/texgyre.lfg index 7782aa509..785982037 100644 --- a/tex/context/fonts/texgyre.lfg +++ b/tex/context/fonts/texgyre.lfg @@ -26,5 +26,11 @@ return { "tgbonummath-regular.otf", "tgbonum-math.otf", }, + ["texgyre-schola-math-regular.otf"] = { + "texgyreschola-math.otf", -- beta + "texgyrescholamath-regular.otf", + "tgscholamath-regular.otf", + "tgschola-math.otf", + }, }, } diff --git a/tex/context/interface/cont-nl.xml b/tex/context/interface/cont-nl.xml index 685033f81..4bfad3798 100644 --- a/tex/context/interface/cont-nl.xml +++ b/tex/context/interface/cont-nl.xml @@ -6525,7 +6525,7 @@ <cd:parameter name="maxbreedte"> <cd:constant type="cd:dimension"/> </cd:parameter> - <cd:parameter name="onbekendeverwijzing"> + <cd:parameter name="onbekendereferentie"> <cd:constant type="leeg"/> <cd:constant type="geen"/> </cd:parameter> @@ -8996,7 +8996,7 @@ <cd:constant type="nee"/> <cd:constant type="geen"/> </cd:parameter> - <cd:parameter name="onbekendeverwijzing"> + <cd:parameter name="onbekendereferentie"> <cd:constant type="ja"/> <cd:constant type="leeg"/> <cd:constant type="nee"/> diff --git a/tex/context/interface/keys-nl.xml b/tex/context/interface/keys-nl.xml index 21536214a..c87088a09 100644 --- a/tex/context/interface/keys-nl.xml +++ b/tex/context/interface/keys-nl.xml @@ -950,7 +950,7 @@ <cd:constant name='reduction' value='reductie'/> <cd:constant name='ref' value='ref'/> <cd:constant name='refcommand' value='refcommand'/> - <cd:constant name='reference' value='verwijzing'/> + <cd:constant name='reference' value='referentie'/> <cd:constant name='referenceprefix' value='referenceprefix'/> <cd:constant name='referencing' value='refereren'/> <cd:constant name='region' value='gebied'/> @@ -1100,7 +1100,7 @@ <cd:constant name='totalnumber' 
value='totalnumber'/> <cd:constant name='type' value='type'/> <cd:constant name='unit' value='eenheid'/> - <cd:constant name='unknownreference' value='onbekendeverwijzing'/> + <cd:constant name='unknownreference' value='onbekendereferentie'/> <cd:constant name='urlalternative' value='urlvariant'/> <cd:constant name='urlspace' value='urlspatie'/> <cd:constant name='validate' value='valideer'/> diff --git a/tex/context/patterns/lang-it.lua b/tex/context/patterns/lang-it.lua index 20ab48fbf..fb6a9d893 100644 --- a/tex/context/patterns/lang-it.lua +++ b/tex/context/patterns/lang-it.lua @@ -38,7 +38,7 @@ return { %\ % This work consists of the single file hyph-it.tex.\ %\ -% \\versionnumber{4.8i} \\versiondate{2011/08/16}\ +% \\versionnumber{4.9} \\versiondate{2014/04/22}\ %\ % These hyphenation patterns for the Italian language are supposed to comply\ % with the Recommendation UNI 6461 on hyphenation issued by the Italian\ @@ -47,6 +47,7 @@ return { % liability is disclaimed.\ %\ % ChangeLog:\ +% - 2014-04-22 - Add few pattherns involving `h'\ % - 2011-08-16 - Change the licence from GNU LGPL into LPPL v1.3.\ % - 2010-05-24 - Fix for Italian patterns for proper hyphenation of -ich and Ljubljana.\ % - 2008-06-09 - Import of original ithyph.tex into hyph-utf8 package.\ @@ -56,11 +57,11 @@ return { }, ["patterns"]={ ["characters"]="'abcdefghijklmnopqrstuvwxyz’", - ["data"]=".a3p2n .anti1 .anti3m2n .bio1 .ca4p3s .circu2m1 .contro1 .di2s3cine .e2x1eu .fran2k3 .free3 .li3p2sa .narco1 .opto1 .orto3p2 .para1 .poli3p2 .pre1 .p2s .re1i2scr .sha2re3 .tran2s3c .tran2s3d .tran2s3l .tran2s3n .tran2s3p .tran2s3r .tran2s3t .su2b3lu .su2b3r .wa2g3n .wel2t1 2'2 2’2 a1ia a1ie a1io a1iu a1uo a1ya 2at. e1iu e2w o1ia o1ie o1io o1iu 1b 2bb 2bc 2bd 2bf 2bm 2bn 2bp 2bs 2bt 2bv b2l b2r 2b. 2b' 2b’ 1c 2cb 2cc 2cd 2cf 2ck 2cm 2cn 2cq 2cs 2ct 2cz 2chh c2h 2ch. 2ch'. 2ch’. 2ch''. 2ch’’. 2chb ch2r 2chn c2l c2r 2c. 2c' 2c’ .c2 1d 2db 2dd 2dg 2dl 2dm 2dn 2dp d2r 2ds 2dt 2dv 2dw 2d. 
2d' 2d’ .d2 1f 2fb 2fg 2ff 2fn f2l f2r 2fs 2ft 2f. 2f' 2f’ 1g 2gb 2gd 2gf 2gg g2h g2l 2gm g2n 2gp g2r 2gs 2gt 2gv 2gw 2gz 2gh2t 2g. 2g' 2g’ 1h 2hb 2hd 2hh hi3p2n h2l 2hm 2hn 2hr 2hv 2h. 2h' 2h’ 1j 2j. 2j' 2j’ 1k 2kg 2kf k2h 2kk k2l 2km k2r 2ks 2kt 2k. 2k' 2k’ 1l 2lb 2lc 2ld 2l3f2 2lg l2h l2j 2lk 2ll 2lm 2ln 2lp 2lq 2lr 2ls 2lt 2lv 2lw 2lz 2l. 2l'. 2l’. 2l'' 2l’’ 1m 2mb 2mc 2mf 2ml 2mm 2mn 2mp 2mq 2mr 2ms 2mt 2mv 2mw 2m. 2m' 2m’ 1n 2nb 2nc 2nd 2nf 2ng 2nk 2nl 2nm 2nn 2np 2nq 2nr 2ns n2s3fer 2nt 2nv 2nz n2g3n 2nheit 2n. 2n' 2n’ 1p 2pd p2h p2l 2pn 3p2ne 2pp p2r 2ps 3p2sic 2pt 2pz 2p. 2p' 2p’ 1q 2qq 2q. 2q' 2q’ 1r 2rb 2rc 2rd 2rf r2h 2rg 2rk 2rl 2rm 2rn 2rp 2rq 2rr 2rs 2rt r2t2s3 2rv 2rx 2rw 2rz 2r. 2r' 2r’ 1s2 2shm 2sh. 2sh' 2sh’ 2s3s s4s3m 2s3p2n 2stb 2stc 2std 2stf 2stg 2stm 2stn 2stp 2sts 2stt 2stv 2sz 4s. 4s'. 4s’. 4s'' 4s’’ 1t 2tb 2tc 2td 2tf 2tg t2h t2l 2tm 2tn 2tp t2r t2s 3t2sch 2tt t2t3s 2tv 2tw t2z 2tzk tz2s 2t. 2t'. 2t’. 2t'' 2t’’ 1v 2vc v2l v2r 2vv 2v. 2v'. 2v’. 2v'' 2v’’ 1w w2h wa2r 2w1y 2w. 2w' 2w’ 1x 2xb 2xc 2xf 2xh 2xm 2xp 2xt 2xw 2x. 2x' 2x’ y1ou y1i 1z 2zb 2zd 2zl 2zn 2zp 2zt 2zs 2zv 2zz 2z. 2z'. 2z’. 2z'' 2z’’ .z2", - ["length"]=1806, + ["data"]=".a3p2n .anti1 .anti3m2n .bio1 .ca4p3s .circu2m1 .contro1 .di2s3cine .e2x1eu .fran2k3 .free3 .li3p2sa .narco1 .opto1 .orto3p2 .para1 .ph2l .ph2r .poli3p2 .pre1 .p2s .re1i2scr .sha2re3 .tran2s3c .tran2s3d .tran2s3l .tran2s3n .tran2s3p .tran2s3r .tran2s3t .su2b3lu .su2b3r .wa2g3n .wel2t1 2'2 2’2 a1ia a1ie a1io a1iu a1uo a1ya 2at. e1iu e2w o1ia o1ie o1io o1iu 1b 2bb 2bc 2bd 2bf 2bm 2bn 2bp 2bs 2bt 2bv b2l b2r 2b. 2b' 2b’ 1c 2cb 2cc 2cd 2cf 2ck 2cm 2cn 2cq 2cs 2ct 2cz 2chh c2h 2ch. 2ch'. 2ch’. 2ch''. 2ch’’. 2chb ch2r 2chn c2l c2r 2c. 2c' 2c’ .c2 1d 2db 2dd 2dg 2dl 2dm 2dn 2dp d2r 2ds 2dt 2dv 2dw 2d. 2d' 2d’ .d2 1f 2fb 2fg 2ff 2fn f2l f2r 2fs 2ft 2f. 2f' 2f’ 1g 2gb 2gd 2gf 2gg g2h g2l 2gm g2n 2gp g2r 2gs 2gt 2gv 2gw 2gz 2gh2t 2g. 2g' 2g’ .h2 1h 2hb 2hd 2hh hi3p2n h2l 2hm 2hn 2hr 2hv 2h. 2h' 2h’ .j2 1j 2j. 
2j' 2j’ .k2 1k 2kg 2kf k2h 2kk k2l 2km k2r 2ks 2kt 2k. 2k' 2k’ 1l 2lb 2lc 2ld 2l3f2 2lg l2h l2j 2lk 2ll 2lm 2ln 2lp 2lq 2lr 2ls 2lt 2lv 2lw 2lz 2l. 2l'. 2l’. 2l'' 2l’’ 1m 2mb 2mc 2mf 2ml 2mm 2mn 2mp 2mq 2mr 2ms 2mt 2mv 2mw 2m. 2m' 2m’ 1n 2nb 2nc 2nd 2nf 2ng 2nk 2nl 2nm 2nn 2np 2nq 2nr 2ns n2s3fer 2nt 2nv 2nz n2g3n 2nheit 2n. 2n' 2n’ 1p 2pd p2h p2l 2pn 3p2ne 2pp p2r 2ps 3p2sic 2pt 2pz 2p. 2p' 2p’ 1q 2qq 2q. 2q' 2q’ 1r 2rb 2rc 2rd 2rf r2h 2rg 2rk 2rl 2rm 2rn 2rp 2rq 2rr 2rs 2rt r2t2s3 2rv 2rx 2rw 2rz 2r. 2r' 2r’ 1s2 2shm 2sh. 2sh' 2sh’ 2s3s s4s3m 2s3p2n 2stb 2stc 2std 2stf 2stg 2stm 2stn 2stp 2sts 2stt 2stv 2sz 4s. 4s'. 4s’. 4s'' 4s’’ .t2 1t 2tb 2tc 2td 2tf 2tg t2h 2th. t2l 2tm 2tn 2tp t2r t2s 3t2sch 2tt t2t3s 2tv 2tw t2z 2tzk tz2s 2t. 2t'. 2t’. 2t'' 2t’’ 1v 2vc v2l v2r 2vv 2v. 2v'. 2v’. 2v'' 2v’’ 1w w2h wa2r 2w1y 2w. 2w' 2w’ 1x 2xb 2xc 2xf 2xh 2xm 2xp 2xt 2xw 2x. 2x' 2x’ y1ou y1i 1z 2zb 2zd 2zl 2zn 2zp 2zt 2zs 2zv 2zz 2z. 2z'. 2z’. 2z'' 2z’’ .z2", + ["length"]=1839, ["minhyphenmax"]=1, ["minhyphenmin"]=1, - ["n"]=377, + ["n"]=384, }, ["version"]="1.001", }
\ No newline at end of file diff --git a/tex/context/patterns/lang-it.pat b/tex/context/patterns/lang-it.pat index 78a127aa7..12a9edf33 100644 --- a/tex/context/patterns/lang-it.pat +++ b/tex/context/patterns/lang-it.pat @@ -21,6 +21,8 @@ .opto1 .orto3p2 .para1 +.ph2l +.ph2r .poli3p2 .pre1 .p2s @@ -137,6 +139,7 @@ g2r 2gh2t 2g. 2g' +.h2 1h 2hb 2hd @@ -149,9 +152,11 @@ h2l 2hv 2h. 2h' +.j2 1j 2j. 2j' +.k2 1k 2kg 2kf @@ -288,6 +293,7 @@ s4s3m 4s. 4s'. 4s'' +.t2 1t 2tb 2tc @@ -295,6 +301,7 @@ s4s3m 2tf 2tg t2h +2th. t2l 2tm 2tn diff --git a/tex/context/patterns/lang-it.rme b/tex/context/patterns/lang-it.rme index 6cfe6896a..2a2fb60d5 100644 --- a/tex/context/patterns/lang-it.rme +++ b/tex/context/patterns/lang-it.rme @@ -32,7 +32,7 @@ Italian hyphenation patterns % % This work consists of the single file hyph-it.tex. % -% \versionnumber{4.8i} \versiondate{2011/08/16} +% \versionnumber{4.9} \versiondate{2014/04/22} % % These hyphenation patterns for the Italian language are supposed to comply % with the Recommendation UNI 6461 on hyphenation issued by the Italian @@ -41,6 +41,7 @@ Italian hyphenation patterns % liability is disclaimed. % % ChangeLog: +% - 2014-04-22 - Add few patterns involving `h' % - 2011-08-16 - Change the licence from GNU LGPL into LPPL v1.3. % - 2010-05-24 - Fix for Italian patterns for proper hyphenation of -ich and Ljubljana. % - 2008-06-09 - Import of original ithyph.tex into hyph-utf8 package. diff --git a/tex/context/sample/cervantes-es.tex b/tex/context/sample/cervantes-es.tex new file mode 100644 index 000000000..153797023 --- /dev/null +++ b/tex/context/sample/cervantes-es.tex @@ -0,0 +1,6 @@ +En un lugar de la Mancha, de cuyo nombre no quiero acordarme, no ha +mucho tiempo que vivía un hidalgo de los de lanza en astillero, adarga +antigua, rocín flaco y galgo corredor. 
Una olla de algo más vaca que +carnero, salpicón las más noches, duelos y quebrantos los sábados, +lantejas los viernes, algún palomino de añadidura los domingos, +consumían las tres partes de su hacienda. diff --git a/tex/context/sample/quevedo-es.tex b/tex/context/sample/quevedo-es.tex new file mode 100644 index 000000000..166b0328f --- /dev/null +++ b/tex/context/sample/quevedo-es.tex @@ -0,0 +1,19 @@ +\startlines +Un soneto me manda hacer Violante +que en mi vida me he visto en tanto aprieto; +catorce versos dicen que es soneto; +burla burlando van los tres delante. + +Yo pensé que no hallara consonante, +y estoy a la mitad de otro cuarteto; +mas si me veo en el primer terceto, +no hay cosa en los cuartetos que me espante. + +Por el primer terceto voy entrando, +y parece que entré con pie derecho, +pues fin con este verso le voy dando. + +Ya estoy en el segundo, y aun sospecho +que voy los trece versos acabando; +contad si son catorce, y está hecho. +\stoplines diff --git a/tex/generic/context/luatex/luatex-basics-gen.lua b/tex/generic/context/luatex/luatex-basics-gen.lua index 9cf5b9317..a304ab6aa 100644 --- a/tex/generic/context/luatex/luatex-basics-gen.lua +++ b/tex/generic/context/luatex/luatex-basics-gen.lua @@ -254,6 +254,18 @@ function caches.loaddata(paths,name) for i=1,#paths do local data = false local luaname, lucname = makefullname(paths[i],name) + if lucname and not lfs.isfile(lucname) and type(caches.compile) == "function" then + -- in case we used luatex and luajittex mixed ... 
lub or luc file + texio.write(string.format("(compiling luc: %s)",lucname)) + data = loadfile(luaname) + if data then + data = data() + end + if data then + caches.compile(data,luaname,lucname) + return data + end + end if lucname and lfs.isfile(lucname) then -- maybe also check for size texio.write(string.format("(load luc: %s)",lucname)) data = loadfile(lucname) diff --git a/tex/generic/context/luatex/luatex-basics-nod.lua b/tex/generic/context/luatex/luatex-basics-nod.lua index 50af40193..373dab5a8 100644 --- a/tex/generic/context/luatex/luatex-basics-nod.lua +++ b/tex/generic/context/luatex/luatex-basics-nod.lua @@ -54,7 +54,7 @@ nodes.handlers = { } local nodecodes = { } for k,v in next, node.types () do nodecodes[string.gsub(v,"_","")] = k end local whatcodes = { } for k,v in next, node.whatsits() do whatcodes[string.gsub(v,"_","")] = k end local glyphcodes = { [0] = "character", "glyph", "ligature", "ghost", "left", "right" } -local disccodes = { [0] = "discretionary","explicit", "automatic", "regular", "first", "second" } +local disccodes = { [0] = "discretionary", "explicit", "automatic", "regular", "first", "second" } nodes.nodecodes = nodecodes nodes.whatcodes = whatcodes @@ -67,11 +67,20 @@ local remove_node = node.remove local new_node = node.new local traverse_id = node.traverse_id -local math_code = nodecodes.math - nodes.handlers.protectglyphs = node.protect_glyphs nodes.handlers.unprotectglyphs = node.unprotect_glyphs +local math_code = nodecodes.math +local end_of_math = node.end_of_math + +function node.end_of_math(n) + if n.id == math_code and n.subtype == 1 then + return n + else + return end_of_math(n) + end +end + function nodes.remove(head, current, free_too) local t = current head, current = remove_node(head,current) diff --git a/tex/generic/context/luatex/luatex-fonts-inj.lua b/tex/generic/context/luatex/luatex-fonts-inj.lua index ae48150a6..5e6c07070 100644 --- a/tex/generic/context/luatex/luatex-fonts-inj.lua +++ 
b/tex/generic/context/luatex/luatex-fonts-inj.lua @@ -11,8 +11,6 @@ if not modules then modules = { } end modules ['node-inj'] = { -- test fonts. Btw, future versions of luatex will have extended glyph properties -- that can be of help. Some optimizations can go away when we have faster machines. --- todo: make a special one for context - local next = next local utfchar = utf.char @@ -108,9 +106,9 @@ function injections.setkern(current,factor,rlmode,x,tfmchr) end end -function injections.setmark(start,base,factor,rlmode,ba,ma,index,baseismark) -- ba=baseanchor, ma=markanchor - local dx, dy = factor*(ba[1]-ma[1]), factor*(ba[2]-ma[2]) -- the index argument is no longer used but when this - local bound = base[a_markbase] -- fails again we should pass it +function injections.setmark(start,base,factor,rlmode,ba,ma) -- ba=baseanchor, ma=markanchor + local dx, dy = factor*(ba[1]-ma[1]), factor*(ba[2]-ma[2]) + local bound = base[a_markbase] local index = 1 if bound then local mb = marks[bound] @@ -125,13 +123,12 @@ function injections.setmark(start,base,factor,rlmode,ba,ma,index,baseismark) -- report_injections("possible problem, %U is base mark without data (id %a)",base.char,bound) end end --- index = index or 1 index = index or 1 bound = #marks + 1 base[a_markbase] = bound start[a_markmark] = bound start[a_markdone] = index - marks[bound] = { [index] = { dx, dy, rlmode, baseismark } } + marks[bound] = { [index] = { dx, dy, rlmode } } return dx, dy, bound end diff --git a/tex/generic/context/luatex/luatex-fonts-merged.lua b/tex/generic/context/luatex/luatex-fonts-merged.lua index 3f408b96f..dd9868626 100644 --- a/tex/generic/context/luatex/luatex-fonts-merged.lua +++ b/tex/generic/context/luatex/luatex-fonts-merged.lua @@ -1,6 +1,6 @@ -- merged file : luatex-fonts-merged.lua -- parent file : luatex-fonts.lua --- merge date : 02/14/14 17:07:59 +-- merge date : 04/28/14 23:24:10 do -- begin closure to overcome local limits and interference @@ -901,6 +901,36 @@ local 
function sortedkeys(tab) return {} end end +local function sortedhashonly(tab) + if tab then + local srt,s={},0 + for key,_ in next,tab do + if type(key)=="string" then + s=s+1 + srt[s]=key + end + end + sort(srt) + return srt + else + return {} + end +end +local function sortedindexonly(tab) + if tab then + local srt,s={},0 + for key,_ in next,tab do + if type(key)=="number" then + s=s+1 + srt[s]=key + end + end + sort(srt) + return srt + else + return {} + end +end local function sortedhashkeys(tab,cmp) if tab then local srt,s={},0 @@ -926,6 +956,8 @@ function table.allkeys(t) return sortedkeys(keys) end table.sortedkeys=sortedkeys +table.sortedhashonly=sortedhashonly +table.sortedindexonly=sortedindexonly table.sortedhashkeys=sortedhashkeys local function nothing() end local function sortedhash(t,cmp) @@ -1723,7 +1755,7 @@ local byte,find,gsub,format=string.byte,string.find,string.gsub,string.format local concat=table.concat local floor=math.floor local type=type -if string.find(os.getenv("PATH"),";") then +if string.find(os.getenv("PATH"),";",1,true) then io.fileseparator,io.pathseparator="\\",";" else io.fileseparator,io.pathseparator="/",":" @@ -2542,9 +2574,11 @@ end if not number then number={} end local stripper=patterns.stripzeros local function points(n) + n=tonumber(n) return (not n or n==0) and "0pt" or lpegmatch(stripper,format("%.5fpt",n/65536)) end local function basepoints(n) + n=tonumber(n) return (not n or n==0) and "0bp" or lpegmatch(stripper,format("%.5fbp",n*(7200/7227)/65536)) end number.points=points @@ -2607,11 +2641,39 @@ local pattern=Carg(1)/function(t) function strings.tabtospace(str,tab) return lpegmatch(pattern,str,1,tab or 7) end -function strings.striplong(str) - str=gsub(str,"^%s*","") - str=gsub(str,"[\n\r]+ *","\n") - return str +local newline=patterns.newline +local endofstring=patterns.endofstring +local whitespace=patterns.whitespace +local spacer=patterns.spacer +local space=spacer^0 +local nospace=space/"" +local 
endofline=nospace*newline +local stripend=(whitespace^1*endofstring)/"" +local normalline=(nospace*((1-space*(newline+endofstring))^1)*nospace) +local stripempty=endofline^1/"" +local normalempty=endofline^1 +local singleempty=endofline*(endofline^0/"") +local doubleempty=endofline*endofline^-1*(endofline^0/"") +local stripstart=stripempty^0 +local p_prune_normal=Cs (stripstart*(stripend+normalline+normalempty )^0 ) +local p_prune_collapse=Cs (stripstart*(stripend+normalline+doubleempty )^0 ) +local p_prune_noempty=Cs (stripstart*(stripend+normalline+singleempty )^0 ) +local p_retain_normal=Cs ((normalline+normalempty )^0 ) +local p_retain_collapse=Cs ((normalline+doubleempty )^0 ) +local p_retain_noempty=Cs ((normalline+singleempty )^0 ) +local striplinepatterns={ + ["prune"]=p_prune_normal, + ["prune and collapse"]=p_prune_collapse, + ["prune and no empty"]=p_prune_noempty, + ["retain"]=p_retain_normal, + ["retain and collapse"]=p_retain_collapse, + ["retain and no empty"]=p_retain_noempty, +} +strings.striplinepatterns=striplinepatterns +function strings.striplines(str,how) + return str and lpegmatch(how and striplinepatterns[how] or p_prune_collapse,str) or str end +strings.striplong=strings.striplines function strings.nice(str) str=gsub(str,"[:%-+_]+"," ") return str @@ -2777,7 +2839,7 @@ local format_i=function(f) if f and f~="" then return format("format('%%%si',a%s)",f,n) else - return format("format('%%i',a%s)",n) + return format("format('%%i',a%s)",n) end end local format_d=format_i @@ -2789,6 +2851,10 @@ local format_f=function(f) n=n+1 return format("format('%%%sf',a%s)",f,n) end +local format_F=function(f) + n=n+1 + return format("((a%s == 0 and '0') or (a%s == 1 and '1') or format('%%%sf',a%s))",n,n,f,n) +end local format_g=function(f) n=n+1 return format("format('%%%sg',a%s)",f,n) @@ -3003,7 +3069,7 @@ local builder=Cs { "start", ( P("%")/""*( V("!") -+V("s")+V("q")+V("i")+V("d")+V("f")+V("g")+V("G")+V("e")+V("E")+V("x")+V("X")+V("o") 
++V("s")+V("q")+V("i")+V("d")+V("f")+V("F")+V("g")+V("G")+V("e")+V("E")+V("x")+V("X")+V("o") +V("c")+V("C")+V("S") +V("Q") +V("N") @@ -3023,6 +3089,7 @@ local builder=Cs { "start", ["i"]=(prefix_any*P("i"))/format_i, ["d"]=(prefix_any*P("d"))/format_d, ["f"]=(prefix_any*P("f"))/format_f, + ["F"]=(prefix_any*P("F"))/format_F, ["g"]=(prefix_any*P("g"))/format_g, ["G"]=(prefix_any*P("G"))/format_G, ["e"]=(prefix_any*P("e"))/format_e, @@ -3070,7 +3137,7 @@ local function make(t,str) f=loadstripped(p)() else n=0 - p=lpegmatch(builder,str,1,"..",t._extensions_) + p=lpegmatch(builder,str,1,t._connector_,t._extensions_) if n>0 then p=format(template,preamble,t._preamble_,arguments[n],p) f=loadstripped(p,t._environment_)() @@ -3086,18 +3153,18 @@ local function use(t,fmt,...) end strings.formatters={} if _LUAVERSION<5.2 then - function strings.formatters.new() - local t={ _extensions_={},_preamble_=preamble,_environment_={},_type_="formatter" } + function strings.formatters.new(noconcat) + local t={ _type_="formatter",_connector_=noconcat and "," or "..",_extensions_={},_preamble_=preamble,_environment_={} } setmetatable(t,{ __index=make,__call=use }) return t end else - function strings.formatters.new() + function strings.formatters.new(noconcat) local e={} for k,v in next,environment do e[k]=v end - local t={ _extensions_={},_preamble_="",_environment_=e,_type_="formatter" } + local t={ _type_="formatter",_connector_=noconcat and "," or "..",_extensions_={},_preamble_="",_environment_=e } setmetatable(t,{ __index=make,__call=use }) return t end @@ -3327,6 +3394,17 @@ function caches.loaddata(paths,name) for i=1,#paths do local data=false local luaname,lucname=makefullname(paths[i],name) + if lucname and not lfs.isfile(lucname) and type(caches.compile)=="function" then + texio.write(string.format("(compiling luc: %s)",lucname)) + data=loadfile(luaname) + if data then + data=data() + end + if data then + caches.compile(data,luaname,lucname) + return data + end + end if 
lucname and lfs.isfile(lucname) then texio.write(string.format("(load luc: %s)",lucname)) data=loadfile(lucname) @@ -3550,9 +3628,17 @@ local free_node=node.free local remove_node=node.remove local new_node=node.new local traverse_id=node.traverse_id -local math_code=nodecodes.math nodes.handlers.protectglyphs=node.protect_glyphs nodes.handlers.unprotectglyphs=node.unprotect_glyphs +local math_code=nodecodes.math +local end_of_math=node.end_of_math +function node.end_of_math(n) + if n.id==math_code and n.subtype==1 then + return n + else + return end_of_math(n) + end +end function nodes.remove(head,current,free_too) local t=current head,current=remove_node(head,current) @@ -3846,14 +3932,15 @@ constructors.sharefonts=false constructors.nofsharedfonts=0 local sharednames={} function constructors.trytosharefont(target,tfmdata) - if constructors.sharefonts then + if constructors.sharefonts then local characters=target.characters local n=1 local t={ target.psname } local u=sortedkeys(characters) for i=1,#u do + local k=u[i] n=n+1;t[n]=k - n=n+1;t[n]=characters[u[i]].index or k + n=n+1;t[n]=characters[k].index or k end local h=md5.HEX(concat(t," ")) local s=sharednames[h] @@ -5697,7 +5784,6 @@ unify=function(data,filename) if unicode then krn[unicode]=kern else - print(unicode,name) end end description.kerns=krn @@ -6504,7 +6590,7 @@ local report_otf=logs.reporter("fonts","otf loading") local fonts=fonts local otf=fonts.handlers.otf otf.glists={ "gsub","gpos" } -otf.version=2.751 +otf.version=2.755 otf.cache=containers.define("fonts","otf",otf.version,true) local fontdata=fonts.hashes.identifiers local chardata=characters and characters.data @@ -7047,15 +7133,22 @@ actions["prepare glyphs"]=function(data,filename,raw) local glyph=cidglyphs[index] if glyph then local unicode=glyph.unicode +if unicode>=0x00E000 and unicode<=0x00F8FF then + unicode=-1 +elseif unicode>=0x0F0000 and unicode<=0x0FFFFD then + unicode=-1 +elseif unicode>=0x100000 and unicode<=0x10FFFD then + 
unicode=-1 +end local name=glyph.name or cidnames[index] - if not unicode or unicode==-1 or unicode>=criterium then + if not unicode or unicode==-1 then unicode=cidunicodes[index] end if unicode and descriptions[unicode] then report_otf("preventing glyph %a at index %H to overload unicode %U",name or "noname",index,unicode) unicode=-1 end - if not unicode or unicode==-1 or unicode>=criterium then + if not unicode or unicode==-1 then if not name then name=format("u%06X",private) end @@ -7101,7 +7194,7 @@ actions["prepare glyphs"]=function(data,filename,raw) if glyph then local unicode=glyph.unicode local name=glyph.name - if not unicode or unicode==-1 or unicode>=criterium then + if not unicode or unicode==-1 then unicode=private unicodes[name]=private if trace_private then @@ -7156,47 +7249,43 @@ actions["check encoding"]=function(data,filename,raw) local unicodetoindex=mapdata and mapdata.map or {} local indextounicode=mapdata and mapdata.backmap or {} local encname=lower(data.enc_name or mapdata.enc_name or "") - local criterium=0xFFFF + local criterium=0xFFFF + local privateoffset=constructors.privateoffset if find(encname,"unicode") then if trace_loading then report_otf("checking embedded unicode map %a",encname) end - local hash={} - for index,unicode in next,indices do - hash[index]=descriptions[unicode] - end - local reported={} - for unicode,index in next,unicodetoindex do - if not descriptions[unicode] then - local d=hash[index] + local reported={} + for maybeunicode,index in next,unicodetoindex do + if descriptions[maybeunicode] then + else + local unicode=indices[index] + if not unicode then + elseif maybeunicode==unicode then + elseif unicode>privateoffset then + else + local d=descriptions[unicode] if d then - if d.unicode~=unicode then - local c=d.copies - if c then - c[unicode]=true - else - d.copies={ [unicode]=true } - end + local c=d.copies + if c then + c[maybeunicode]=true + else + d.copies={ [maybeunicode]=true } end - elseif not reported[i] 
then + elseif index and not reported[index] then report_otf("missing index %i",index) - reported[i]=true + reported[index]=true end end end - for index,data in next,hash do - data.copies=sortedkeys(data.copies) - end - for index,unicode in next,indices do - local description=hash[index] - local copies=description.copies - if copies then - duplicates[unicode]=copies - description.copies=nil - else - report_otf("copies but no unicode parent %U",unicode) - end + end + for unicode,data in next,descriptions do + local d=data.copies + if d then + duplicates[unicode]=sortedkeys(d) + data.copies=nil end + end elseif properties.cidinfo then report_otf("warning: no unicode map, used cidmap %a",properties.cidinfo.usedname) else @@ -7238,7 +7327,7 @@ actions["add duplicates"]=function(data,filename,raw) end end end - if u>0 then + if u>0 then local duplicate=table.copy(description) duplicate.comment=format("copy of U+%05X",unicode) descriptions[u]=duplicate @@ -7440,10 +7529,16 @@ actions["reorganize subtables"]=function(data,filename,raw) report_otf("skipping weird lookup number %s",k) elseif features then local f={} + local o={} for i=1,#features do local df=features[i] local tag=strip(lower(df.tag)) - local ft=f[tag] if not ft then ft={} f[tag]=ft end + local ft=f[tag] + if not ft then + ft={} + f[tag]=ft + o[#o+1]=tag + end local dscripts=df.scripts for i=1,#dscripts do local d=dscripts[i] @@ -7463,6 +7558,7 @@ actions["reorganize subtables"]=function(data,filename,raw) subtables=subtables, markclass=markclass, features=f, + order=o, } else lookups[name]={ @@ -8908,8 +9004,9 @@ basemethods.shared={ basemethod="independent" local function featuresinitializer(tfmdata,value) if true then - local t=trace_preparing and os.clock() + local starttime=trace_preparing and os.clock() local features=tfmdata.shared.features + local fullname=trace_preparing and tfmdata.properties.fullname if features then applybasemethod("initializehashes",tfmdata) local 
collectlookups=otf.collectlookups @@ -8919,26 +9016,34 @@ local function featuresinitializer(tfmdata,value) local language=properties.language local basesubstitutions=rawdata.resources.features.gsub local basepositionings=rawdata.resources.features.gpos - if basesubstitutions then - for feature,data in next,basesubstitutions do - local value=features[feature] - if value then - local validlookups,lookuplist=collectlookups(rawdata,feature,script,language) - if validlookups then - applybasemethod("preparesubstitutions",tfmdata,feature,value,validlookups,lookuplist) - registerbasefeature(feature,value) - end - end - end - end - if basepositionings then - for feature,data in next,basepositionings do - local value=features[feature] - if value then - local validlookups,lookuplist=collectlookups(rawdata,feature,script,language) - if validlookups then - applybasemethod("preparepositionings",tfmdata,feature,features[feature],validlookups,lookuplist) - registerbasefeature(feature,value) + if basesubstitutions or basepositionings then + local sequences=tfmdata.resources.sequences + for s=1,#sequences do + local sequence=sequences[s] + local sfeatures=sequence.features + if sfeatures then + local order=sequence.order + if order then + for i=1,#order do + local feature=order[i] + if features[feature] then + local validlookups,lookuplist=collectlookups(rawdata,feature,script,language) + if not validlookups then + elseif basesubstitutions and basesubstitutions[feature] then + if trace_preparing then + report_prepare("filtering base feature %a for %a",feature,fullname) + end + applybasemethod("preparesubstitutions",tfmdata,feature,value,validlookups,lookuplist) + registerbasefeature(feature,value) + elseif basepositionings and basepositionings[feature] then + if trace_preparing then + report_prepare("filtering base feature %a for %a",feature,fullname) + end + applybasemethod("preparepositionings",tfmdata,feature,features[feature],validlookups,lookuplist) + 
registerbasefeature(feature,value) + end + end + end end end end @@ -8946,7 +9051,7 @@ local function featuresinitializer(tfmdata,value) registerbasehash(tfmdata) end if trace_preparing then - report_prepare("preparation time is %0.3f seconds for %a",os.clock()-t,tfmdata.properties.fullname) + report_prepare("preparation time is %0.3f seconds for %a",os.clock()-starttime,fullname) end end end @@ -9042,9 +9147,9 @@ function injections.setkern(current,factor,rlmode,x,tfmchr) return 0,0 end end -function injections.setmark(start,base,factor,rlmode,ba,ma,index,baseismark) - local dx,dy=factor*(ba[1]-ma[1]),factor*(ba[2]-ma[2]) - local bound=base[a_markbase] +function injections.setmark(start,base,factor,rlmode,ba,ma) + local dx,dy=factor*(ba[1]-ma[1]),factor*(ba[2]-ma[2]) + local bound=base[a_markbase] local index=1 if bound then local mb=marks[bound] @@ -9063,7 +9168,7 @@ function injections.setmark(start,base,factor,rlmode,ba,ma,index,baseismark) base[a_markbase]=bound start[a_markmark]=bound start[a_markdone]=index - marks[bound]={ [index]={ dx,dy,rlmode,baseismark } } + marks[bound]={ [index]={ dx,dy,rlmode } } return dx,dy,bound end local function dir(n) @@ -11413,14 +11518,20 @@ local autofeatures=fonts.analyzers.features local function initialize(sequence,script,language,enabled) local features=sequence.features if features then - for kind,scripts in next,features do - local valid=enabled[kind] - if valid then - local languages=scripts[script] or scripts[wildcard] - if languages and (languages[language] or languages[wildcard]) then - return { valid,autofeatures[kind] or false,sequence.chain or 0,kind,sequence } + local order=sequence.order + if order then + for i=1,#order do + local kind=order[i] + local valid=enabled[kind] + if valid then + local scripts=features[kind] + local languages=scripts[script] or scripts[wildcard] + if languages and (languages[language] or languages[wildcard]) then + return { valid,autofeatures[kind] or false,sequence.chain or 
0,kind,sequence } + end end end + else end end return false @@ -11447,12 +11558,12 @@ function otf.dataset(tfmdata,font) } rs[language]=rl local sequences=tfmdata.resources.sequences -for s=1,#sequences do - local v=enabled and initialize(sequences[s],script,language,enabled) - if v then - rl[#rl+1]=v - end -end + for s=1,#sequences do + local v=enabled and initialize(sequences[s],script,language,enabled) + if v then + rl[#rl+1]=v + end + end end return rl end @@ -12479,6 +12590,14 @@ local function packdata(data) features[script]=pack_normal(feature) end end + local order=sequence.order + if order then + sequence.order=pack_indexed(order) + end + local markclass=sequence.markclass + if markclass then + sequence.markclass=pack_boolean(markclass) + end end end local lookups=resources.lookups @@ -12891,6 +13010,20 @@ local function unpackdata(data) end end end + local order=feature.order + if order then + local tv=tables[order] + if tv then + feature.order=tv + end + end + local markclass=feature.markclass + if markclass then + local tv=tables[markclass] + if tv then + feature.markclass=tv + end + end end end local lookups=resources.lookups diff --git a/tex/generic/context/luatex/luatex-fonts-otn.lua b/tex/generic/context/luatex/luatex-fonts-otn.lua index c57be5f02..068f0a9b9 100644 --- a/tex/generic/context/luatex/luatex-fonts-otn.lua +++ b/tex/generic/context/luatex/luatex-fonts-otn.lua @@ -2038,14 +2038,21 @@ local autofeatures = fonts.analyzers.features -- was: constants local function initialize(sequence,script,language,enabled) local features = sequence.features if features then - for kind, scripts in next, features do - local valid = enabled[kind] - if valid then - local languages = scripts[script] or scripts[wildcard] - if languages and (languages[language] or languages[wildcard]) then - return { valid, autofeatures[kind] or false, sequence.chain or 0, kind, sequence } + local order = sequence.order + if order then + for i=1,#order do + local kind = order[i] 
-- + local valid = enabled[kind] + if valid then + local scripts = features[kind] -- + local languages = scripts[script] or scripts[wildcard] + if languages and (languages[language] or languages[wildcard]) then + return { valid, autofeatures[kind] or false, sequence.chain or 0, kind, sequence } + end end end + else + -- can't happen end end return false @@ -2074,19 +2081,12 @@ function otf.dataset(tfmdata,font) -- generic variant, overloaded in context } rs[language] = rl local sequences = tfmdata.resources.sequences --- setmetatableindex(rl, function(t,k) --- if type(k) == "number" then --- local v = enabled and initialize(sequences[k],script,language,enabled) --- t[k] = v --- return v --- end --- end) -for s=1,#sequences do - local v = enabled and initialize(sequences[s],script,language,enabled) - if v then - rl[#rl+1] = v - end -end + for s=1,#sequences do + local v = enabled and initialize(sequences[s],script,language,enabled) + if v then + rl[#rl+1] = v + end + end end return rl end |
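The `caches.loaddata` hunks above (in `luatex-basics-gen.lua` and again in the merged file) add a fallback for the case where only the plain `.lua` cache exists, e.g. because the available bytecode was produced by the other engine (luatex and luajittex bytecode are incompatible): the data is loaded from source and recompiled before the normal bytecode path is tried. A minimal sketch of that control flow, with the file system and `caches.compile` stubbed out; the `files` table and helper names here are illustrative, not part of the real API:

```lua
-- Fake file system: only the plain .lua cache exists.
local files = { ["font.lua"] = "return { version = 1 }" }

local function isfile(name)
  return files[name] ~= nil
end

-- Stand-in for caches.compile: pretend to write bytecode.
local function compile(data, luaname, lucname)
  files[lucname] = "<bytecode>"
end

local function loaddata(luaname, lucname)
  if lucname and not isfile(lucname) then
    -- the new fallback: no usable bytecode, so load from source
    -- and compile it now, instead of failing on the fast path
    local chunk = (loadstring or load)(files[luaname]) -- Lua 5.1 / 5.2+
    local data = chunk and chunk()
    if data then
      compile(data, luaname, lucname)
      return data
    end
  end
  if lucname and isfile(lucname) then
    -- normal fast path: the real code does loadfile(lucname) here
  end
end

local data = loaddata("font.lua", "font.luc")
```

After the first call the stub "bytecode" exists, so subsequent loads take the fast path.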
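The merged file also gains `table.sortedhashonly` and `table.sortedindexonly`, which split a mixed table into its sorted string keys and its sorted numeric keys respectively. A small self-contained sketch of the pair (the sample table is made up):

```lua
-- Collect only the string keys of a table, sorted.
local function sortedhashonly(tab)
  local srt, s = {}, 0
  for key in next, tab do
    if type(key) == "string" then
      s = s + 1
      srt[s] = key
    end
  end
  table.sort(srt)
  return srt
end

-- Collect only the numeric (index) keys of a table, sorted.
local function sortedindexonly(tab)
  local srt, s = {}, 0
  for key in next, tab do
    if type(key) == "number" then
      s = s + 1
      srt[s] = key
    end
  end
  table.sort(srt)
  return srt
end

local t = { [3] = "c", [1] = "a", foo = true, bar = true }
local names = sortedhashonly(t)  -- string keys only
local index = sortedindexonly(t) -- numeric keys only
```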
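The recurring change in the last hunks replaces `for kind, scripts in next, features do` with a walk over a new `sequence.order` array. Since `next`/`pairs` iterate a Lua hash in unspecified order, OpenType features could previously be picked up in a nondeterministic sequence; `order` (built in "reorganize subtables" and packed/unpacked alongside `features`) records the order in which tags were first seen. A reduced sketch of the idea, with illustrative names and made-up data:

```lua
-- The 'features' hash alone loses declaration order, because
-- next/pairs iterate Lua hashes in unspecified order; the 'order'
-- array preserves the first-seen position of each tag.
local function addfeature(sequence, tag, data)
  if not sequence.features[tag] then
    local order = sequence.order
    order[#order + 1] = tag -- remember first-seen position
  end
  sequence.features[tag] = data
end

-- Mirrors the reworked initialize(): walk the array, not the hash,
-- so the first enabled feature found is deterministic.
local function firstenabled(sequence, enabled)
  for i = 1, #sequence.order do
    local kind = sequence.order[i]
    if enabled[kind] then
      return kind
    end
  end
  return false
end

local sequence = { features = { }, order = { } }
addfeature(sequence, "liga", { script = "latn" })
addfeature(sequence, "kern", { script = "latn" })

local kind = firstenabled(sequence, { kern = true })
-- kind is "kern" on every run, independent of hash iteration order
```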