From 778f381ba6a448ab00d67994a412dd4226d43238 Mon Sep 17 00:00:00 2001 From: Hans Hagen Date: Fri, 8 Oct 2021 20:46:55 +0200 Subject: 2021-10-08 20:07:00 --- .../context/documents/scite-context-readme.pdf | Bin 212528 -> 135520 bytes .../context/documents/scite-context-readme.tex | 83 +- .../lexers/data/scite-context-data-context.lua | 2 +- context/data/scite/context/lexers/lexer.lua | 3 - .../context/lexers/scite-context-lexer-bibtex.lua | 42 +- .../context/lexers/scite-context-lexer-bnf.lua | 34 +- .../context/lexers/scite-context-lexer-cld.lua | 19 +- .../context/lexers/scite-context-lexer-cpp-web.lua | 22 +- .../context/lexers/scite-context-lexer-cpp.lua | 77 +- .../context/lexers/scite-context-lexer-dummy.lua | 23 +- .../context/lexers/scite-context-lexer-json.lua | 67 +- .../lexers/scite-context-lexer-lua-longstring.lua | 21 +- .../context/lexers/scite-context-lexer-lua.lua | 199 +- .../context/lexers/scite-context-lexer-mps.lua | 127 +- .../context/lexers/scite-context-lexer-pdf.lua | 41 +- .../context/lexers/scite-context-lexer-sas.lua | 59 +- .../context/lexers/scite-context-lexer-sql.lua | 51 +- .../context/lexers/scite-context-lexer-tex-web.lua | 17 +- .../context/lexers/scite-context-lexer-tex.lua | 322 +- .../context/lexers/scite-context-lexer-txt.lua | 63 +- .../lexers/scite-context-lexer-web-snippets.lua | 13 +- .../context/lexers/scite-context-lexer-web.lua | 65 +- .../lexers/scite-context-lexer-xml-cdata.lua | 18 +- .../lexers/scite-context-lexer-xml-comment.lua | 18 +- .../lexers/scite-context-lexer-xml-script.lua | 18 +- .../context/lexers/scite-context-lexer-xml.lua | 107 +- .../scite/context/lexers/scite-context-lexer.lua | 3272 ++---- .../context/lexers/themes/scite-context-theme.lua | 63 +- .../context/scite-context-data-context.properties | 2 +- .../context/scite-context-external.properties | 98 +- .../data/scite/context/scite-context.properties | 78 +- context/data/scite/context/scite-ctx.lua | 2259 ++-- context/data/scite/context/scite-ctx.properties | 47 +- context/data/scite/context/scite-pragma.properties | 7 +- .../context/data/scite-context-data-bidi.lua | 10357 ------------------ .../context/data/scite-context-data-context.lua | 4 - .../context/data/scite-context-data-interfaces.lua | 4 - .../context/data/scite-context-data-metafun.lua | 4 - .../context/data/scite-context-data-metapost.lua | 9 - .../context/data/scite-context-data-tex.lua | 9 - context/data/textadept/context/init.lua | 147 - context/data/textadept/context/lexers/lexer.lua | 2686 ----- context/data/textadept/context/lexers/lexer.rme | 1 - .../context/lexers/scite-context-lexer-bibtex.lua | 195 - .../context/lexers/scite-context-lexer-bidi.lua | 598 -- .../context/lexers/scite-context-lexer-bnf.lua | 99 - .../context/lexers/scite-context-lexer-cld.lua | 23 - .../context/lexers/scite-context-lexer-cpp-web.lua | 23 - .../context/lexers/scite-context-lexer-cpp.lua | 199 - .../context/lexers/scite-context-lexer-dummy.lua | 35 - .../context/lexers/scite-context-lexer-json.lua | 101 - .../lexers/scite-context-lexer-lua-longstring.lua | 31 - .../context/lexers/scite-context-lexer-lua.lua | 396 - .../context/lexers/scite-context-lexer-mps.lua | 189 - .../lexers/scite-context-lexer-pdf-object.lua | 136 - .../lexers/scite-context-lexer-pdf-xref.lua | 43 - .../context/lexers/scite-context-lexer-pdf.lua | 218 - .../context/lexers/scite-context-lexer-sas.lua | 102 - .../context/lexers/scite-context-lexer-sql.lua | 238 - .../context/lexers/scite-context-lexer-tex-web.lua | 23 - 
.../context/lexers/scite-context-lexer-tex.lua | 588 -- .../context/lexers/scite-context-lexer-txt.lua | 80 - .../lexers/scite-context-lexer-web-snippets.lua | 132 - .../context/lexers/scite-context-lexer-web.lua | 67 - .../lexers/scite-context-lexer-xml-cdata.lua | 33 - .../lexers/scite-context-lexer-xml-comment.lua | 33 - .../lexers/scite-context-lexer-xml-script.lua | 33 - .../context/lexers/scite-context-lexer-xml.lua | 350 - .../context/lexers/scite-context-lexer.lua | 2686 ----- context/data/textadept/context/lexers/text.lua | 35 - .../context/modules/textadept-context-files.lua | 826 -- .../context/modules/textadept-context-runner.lua | 1100 -- .../context/modules/textadept-context-settings.lua | 152 - .../context/modules/textadept-context-types.lua | 175 - .../data/textadept/context/textadept-context.cmd | 56 - .../data/textadept/context/textadept-context.sh | 12 - .../context/themes/scite-context-theme.lua | 159 - .../context/syntaxes/context-syntax-tex.json | 2 +- .../general/manuals/lowlevel-alignments.pdf | Bin 74011 -> 74661 bytes .../context/2021/context-2021-compactfonts.pdf | Bin 0 -> 56466 bytes .../context/2021/context-2021-compactfonts.tex | 609 ++ .../context/2021/context-2021-localcontrol.pdf | Bin 0 -> 35875 bytes .../context/2021/context-2021-localcontrol.tex | 369 + .../context/2021/context-2021-luametafun.pdf | Bin 0 -> 37579 bytes .../context/2021/context-2021-luametafun.tex | 362 + .../context/2021/context-2021-math.pdf | Bin 0 -> 21605 bytes .../context/2021/context-2021-math.tex | 236 + .../2021/context-2021-overloadprotection.pdf | Bin 0 -> 32849 bytes .../2021/context-2021-overloadprotection.tex | 298 + .../context/2021/context-2021-paragraphs.pdf | Bin 0 -> 24329 bytes .../context/2021/context-2021-paragraphs.tex | 162 + .../context/2021/context-2021-programming.pdf | Bin 0 -> 32433 bytes .../context/2021/context-2021-programming.tex | 325 + doc/context/scripts/mkiv/mtx-pdf.html | 1 + doc/context/scripts/mkiv/mtx-pdf.man | 3 + doc/context/scripts/mkiv/mtx-pdf.xml | 2 + .../manuals/lowlevel/lowlevel-alignments.tex | 43 +- .../general/manuals/lowlevel/lowlevel-inserts.tex | 10 +- .../general/manuals/metafun/metafun-examples.tex | 2 +- metapost/context/base/mpxl/mp-luas.mpxl | 4 +- metapost/context/base/mpxl/mp-tool.mpxl | 36 +- scripts/context/lua/mtx-context.lua | 12 +- scripts/context/lua/mtx-fonts.lua | 4 +- scripts/context/lua/mtx-pdf.lua | 25 +- tex/context/base/mkii/cont-new.mkii | 2 +- tex/context/base/mkii/context.mkii | 2 +- tex/context/base/mkii/mult-cs.mkii | 1 + tex/context/base/mkiv/char-def.lua | 2 +- tex/context/base/mkiv/cont-new.mkiv | 2 +- tex/context/base/mkiv/context.mkiv | 2 +- tex/context/base/mkiv/font-otr.lua | 3 +- tex/context/base/mkiv/math-fbk.lua | 30 +- tex/context/base/mkiv/mult-def.lua | 3 + tex/context/base/mkiv/mult-low.lua | 2 +- tex/context/base/mkiv/publ-aut.lua | 16 + tex/context/base/mkiv/status-files.pdf | Bin 24914 -> 24847 bytes tex/context/base/mkiv/status-lua.pdf | Bin 253654 -> 253894 bytes tex/context/base/mkiv/typo-drp.lua | 13 +- tex/context/base/mkiv/util-sci.lua | 19 +- tex/context/base/mkxl/cont-new.mkxl | 2 +- tex/context/base/mkxl/context.mkxl | 2 +- tex/context/base/mkxl/lang-ini.mkxl | 3 + tex/context/base/mkxl/lpdf-ini.lmt | 10 +- tex/context/base/mkxl/lpdf-pde.lmt | 225 +- tex/context/base/mkxl/mlib-lmp.lmt | 15 + tex/context/base/mkxl/mlib-pdf.lmt | 1 + tex/context/base/mkxl/page-mix.mkxl | 1 + tex/context/base/mkxl/spac-ali.mkxl | 3 +- tex/context/base/mkxl/strc-itm.mklx | 30 +- 
tex/context/base/mkxl/strc-mar.mkxl | 12 +- tex/context/base/mkxl/strc-not.lmt | 6 +- tex/context/base/mkxl/typo-drp.lmt | 13 +- tex/context/interface/mkii/cont-cs.xml | 10394 +------------------ tex/context/interface/mkii/keys-cs.xml | 1 + tex/context/modules/mkiv/m-scite.mkiv | 10 +- tex/context/modules/mkxl/m-openstreetmap.lmt | 127 +- tex/context/modules/mkxl/m-openstreetmap.mkxl | 50 +- tex/generic/context/luatex/luatex-fonts-merged.lua | 3 +- 138 files changed, 5801 insertions(+), 37433 deletions(-) delete mode 100644 context/data/scite/context/lexers/lexer.lua delete mode 100644 context/data/textadept/context/data/scite-context-data-bidi.lua delete mode 100644 context/data/textadept/context/data/scite-context-data-context.lua delete mode 100644 context/data/textadept/context/data/scite-context-data-interfaces.lua delete mode 100644 context/data/textadept/context/data/scite-context-data-metafun.lua delete mode 100644 context/data/textadept/context/data/scite-context-data-metapost.lua delete mode 100644 context/data/textadept/context/data/scite-context-data-tex.lua delete mode 100644 context/data/textadept/context/init.lua delete mode 100644 context/data/textadept/context/lexers/lexer.lua delete mode 100644 context/data/textadept/context/lexers/lexer.rme delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-bibtex.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-bidi.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-bnf.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-cld.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-cpp-web.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-cpp.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-dummy.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-json.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-lua-longstring.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-lua.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-mps.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-pdf-object.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-pdf-xref.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-pdf.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-sas.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-sql.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-tex-web.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-tex.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-txt.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-web-snippets.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-web.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-xml-cdata.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-xml-comment.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-xml-script.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer-xml.lua delete mode 100644 context/data/textadept/context/lexers/scite-context-lexer.lua 
delete mode 100644 context/data/textadept/context/lexers/text.lua delete mode 100644 context/data/textadept/context/modules/textadept-context-files.lua delete mode 100644 context/data/textadept/context/modules/textadept-context-runner.lua delete mode 100644 context/data/textadept/context/modules/textadept-context-settings.lua delete mode 100644 context/data/textadept/context/modules/textadept-context-types.lua delete mode 100644 context/data/textadept/context/textadept-context.cmd delete mode 100644 context/data/textadept/context/textadept-context.sh delete mode 100644 context/data/textadept/context/themes/scite-context-theme.lua create mode 100644 doc/context/presentations/context/2021/context-2021-compactfonts.pdf create mode 100644 doc/context/presentations/context/2021/context-2021-compactfonts.tex create mode 100644 doc/context/presentations/context/2021/context-2021-localcontrol.pdf create mode 100644 doc/context/presentations/context/2021/context-2021-localcontrol.tex create mode 100644 doc/context/presentations/context/2021/context-2021-luametafun.pdf create mode 100644 doc/context/presentations/context/2021/context-2021-luametafun.tex create mode 100644 doc/context/presentations/context/2021/context-2021-math.pdf create mode 100644 doc/context/presentations/context/2021/context-2021-math.tex create mode 100644 doc/context/presentations/context/2021/context-2021-overloadprotection.pdf create mode 100644 doc/context/presentations/context/2021/context-2021-overloadprotection.tex create mode 100644 doc/context/presentations/context/2021/context-2021-paragraphs.pdf create mode 100644 doc/context/presentations/context/2021/context-2021-paragraphs.tex create mode 100644 doc/context/presentations/context/2021/context-2021-programming.pdf create mode 100644 doc/context/presentations/context/2021/context-2021-programming.tex diff --git a/context/data/scite/context/documents/scite-context-readme.pdf b/context/data/scite/context/documents/scite-context-readme.pdf index 834cd54da..8de9c69d0 100644 Binary files a/context/data/scite/context/documents/scite-context-readme.pdf and b/context/data/scite/context/documents/scite-context-readme.pdf differ diff --git a/context/data/scite/context/documents/scite-context-readme.tex b/context/data/scite/context/documents/scite-context-readme.tex index fe5120264..df25db367 100644 --- a/context/data/scite/context/documents/scite-context-readme.tex +++ b/context/data/scite/context/documents/scite-context-readme.tex @@ -102,7 +102,8 @@ frame=off, foregroundcolor=gray] {\definedfont[SerifBold sa 10]SciTE\endgraf - \definedfont[SerifBold sa 2.48]IN CONTEXT MkIV\kern.25\bodyfontsize} +% \definedfont[SerifBold sa 2.48]IN CONTEXT MkIV\kern.25\bodyfontsize} + \definedfont[SerifBold sa 1.60]IN CONTEXT MkIV & LMTX\kern.25\bodyfontsize} \startTEXpage \tightlayer[TitlePage] @@ -110,21 +111,14 @@ % main text -\startsubject[title={Warning}] - -\SCITE\ version 3.61 works ok but 3.62 crashes. It'a a real pity that \SCITE\ -doesn't have the scintillua lexer built in, which would also make integration a -bit nicer by sharing the \LUA\ instance. The \CONTEXT\ lexing discussed here is -the lexing I assume when using \CONTEXT\ \MKIV, but alas it's not easy to get it -running on \UNIX\ and on \MACOSX\ there is no \LUA\ lexing available. - \startsubject[title={About \SCITE}] For a long time at \PRAGMA\ we used \TEXEDIT, an editor we'd written in \MODULA. It had some project management features and recognized the project structure in \CONTEXT\ documents. 
Later we rewrote this to a platform independent reimplementation called \TEXWORK\ written in \PERLTK\ (not to be confused with -the editor with the plural name). +the editor with the plural name) that, when I last checked, still works okay, which +is proof that \PERLTK\ has been stable for decades. In the beginning of the century I ran into \SCITE, written by Neil Hodgson. Although the mentioned editors provide some functionality not present in \SCITE\ @@ -142,8 +136,15 @@ a \TEX/\LUA\ hybrid, it made sense to look into this. The result is a couple of lexers that suit \TEX, \METAPOST\ and \LUA\ usage in \CONTEXT\ \MKIV. As we also use \XML\ as input and output format a lexer for \XML\ is also provided. And because \PDF\ is one of the backend formats lexing of \PDF\ is also implemented. -\footnote {In the process some of the general lexing framework was adapted to -suit our demands for speed. We ship these files as well.} +In the process some of the general lexing framework was adapted to suit our +demands for speed. For a long time we shipped these files as well but at some point I +decided that it made no sense to keep adapting to the relatively frequent changes +in the \API. The last version in the 3.* series worked okay, in the 4.* series +things failed but we didn't adapt, and when series 5.* showed up I decided to +drop the old lexer compatibility. I assume that a version of \SCITE\ is run that +has \LPEG\ available in the main \LUA\ instance and that also supports copying +text fragments using the editor object. (Until that is the case, we provide binaries +with the \CONTEXT\ distribution.) In the \CONTEXT\ (standalone) distribution you will find the relevant files under: @@ -227,57 +228,43 @@ Where the second path is the path we will put more files. \stopsubject -\startsubject[title={Installing \type {scintillua}}] - -Next you need to install the lpeg lexers. \footnote {Versions later than 2.11 -will not run on \MSWINDOWS\ 2K. In that case you need to comment the external -lexer import.} The library is part of the \type {textadept} editor by Mitchell -(\hyphenatedurl {mitchell.att.foicica.com}) which is also based on scintilla: -The archive can be fetched from: - -\starttyping -http://foicica.com/scintillua/ -\stoptyping +\startsubject[title={Binaries}] -On \MSWINDOWS\ you need to copy the files to the \type {wscite} folder (so we end -up with a \type {lexers} subfolder there). For \LINUX\ the place depends on the -distribution, for instance \type {/usr/share/scite}; this is the place where the -regular properties files live. \footnote {If you update, don't do so without -testing first. Sometimes there are changes in \SCITE\ that influence the lexers -in which case you have to wait till we have update them to suit those changes.} - -So, you end up, on \MSWINDOWS\ with: +When you compile binaries yourself or get them from somewhere you need to make +sure that they end up in the right place (see the previous section). When you're on +\MSWINDOWS\ they go to: \starttyping -c:\data\system\scite\wscite\lexers +wscite/scite.exe +wscite/scilexer.dll \stoptyping -And on \LINUX: +And on \LINUX\ they end up in: \starttyping -/usr/share/scite/lexers +/usr/bin/SciTE +/usr/bin/libscintilla.so +/usr/bin/liblexilla.so \stoptyping -Beware: if you're on a 64 bit system, you need to rename the 64 bit \type {so} -library into one without a number. Unfortunately the 64 bit library is now always -available which can give surprises when the operating system gets updates.
In such -a case you should downgrade or use \type {wine} with the \MSWINDOWS\ binaries -instead. After installation you need to restart \SCITE\ in order to see if things -work out as expected. +Because we only use the official \LUA\ interface methods the lexers should just work, +assuming that you have imported the \typ {context/scite-context-user} properties file. \stopsubject \startsubject[title={Installing the \CONTEXT\ lexers}] -When we started using this nice extension, we ran into issues and as a -consequence shipped a patched \LUA\ code. We also needed some more control as we -wanted to provide more features and complex nested lexers. Because the library -\API\ changed a couple of times, we now have our own variant which will be -cleaned up over time to be more consistent with our other \LUA\ code (so that we -can also use it in \CONTEXT\ as variant verbatim lexer). We hope to be able to -use the \type {scintillua} library as it does the job. +% When we started using this nice extension, we ran into issues and as a +% consequence shipped a patched \LUA\ code. We also needed some more control as we +% wanted to provide more features and complex nested lexers. Because the library +% \API\ changed a couple of times, we now have our own variant which will be +% cleaned up over time to be more consistent with our other \LUA\ code (so that we +% can also use it in \CONTEXT\ as variant verbatim lexer). We hope to be able to +% use the \type {scintillua} library as it does the job. +% +% Anyway, if you want to use \CONTEXT, you need to copy the relevant files from -Anyway, if you want to use \CONTEXT, you need to copy the relevant files from +If you want to use \CONTEXT, you need to copy the relevant files from \starttyping /tex/texmf-context/context/data/scite diff --git a/context/data/scite/context/lexers/data/scite-context-data-context.lua b/context/data/scite/context/lexers/data/scite-context-data-context.lua index 315e98bef..5ddc01b38 100644 --- a/context/data/scite/context/lexers/data/scite-context-data-context.lua +++ b/context/data/scite/context/lexers/data/scite-context-data-context.lua @@ -1,4 +1,4 @@ return { - ["constants"]={ "zerocount", "minusone", "minustwo", "plusone", "plustwo", "plusthree", "plusfour", "plusfive", "plussix", "plusseven", "pluseight", "plusnine", "plusten", "pluseleven", "plustwelve", "plussixteen", "plusfifty", "plushundred", "plusonehundred", "plustwohundred", "plusfivehundred", "plusthousand", "plustenthousand", "plustwentythousand", "medcard", "maxcard", "maxcardminusone", "zeropoint", "onepoint", "halfapoint", "onebasepoint", "maxcount", "maxdimen", "scaledpoint", "thousandpoint", "points", "halfpoint", "zeroskip", "zeromuskip", "onemuskip", "pluscxxvii", "pluscxxviii", "pluscclv", "pluscclvi", "normalpagebox", "directionlefttoright", "directionrighttoleft", "endoflinetoken", "outputnewlinechar", "emptytoks", "empty", "undefined", "prerollrun", "voidbox", "emptybox", "emptyvbox", "emptyhbox", "bigskipamount", "medskipamount", "smallskipamount", "fmtname", "fmtversion", "texengine", "texenginename", "texengineversion", "texenginefunctionality", "luatexengine", "pdftexengine", "xetexengine", "unknownengine", "contextformat", "contextversion", "contextlmtxmode", "contextmark", "mksuffix", "activecatcode", "bgroup", "egroup", "endline", "conditionaltrue", "conditionalfalse", "attributeunsetvalue", "statuswrite", "uprotationangle", "rightrotationangle", "downrotationangle", "leftrotationangle", "inicatcodes", "ctxcatcodes", "texcatcodes", "notcatcodes", 
"txtcatcodes", "vrbcatcodes", "prtcatcodes", "nilcatcodes", "luacatcodes", "tpacatcodes", "tpbcatcodes", "xmlcatcodes", "ctdcatcodes", "rlncatcodes", "escapecatcode", "begingroupcatcode", "endgroupcatcode", "mathshiftcatcode", "alignmentcatcode", "endoflinecatcode", "parametercatcode", "superscriptcatcode", "subscriptcatcode", "ignorecatcode", "spacecatcode", "lettercatcode", "othercatcode", "activecatcode", "commentcatcode", "invalidcatcode", "tabasciicode", "newlineasciicode", "formfeedasciicode", "endoflineasciicode", "endoffileasciicode", "commaasciicode", "spaceasciicode", "periodasciicode", "hashasciicode", "dollarasciicode", "commentasciicode", "ampersandasciicode", "colonasciicode", "backslashasciicode", "circumflexasciicode", "underscoreasciicode", "leftbraceasciicode", "barasciicode", "rightbraceasciicode", "tildeasciicode", "delasciicode", "leftparentasciicode", "rightparentasciicode", "lessthanasciicode", "morethanasciicode", "doublecommentsignal", "atsignasciicode", "exclamationmarkasciicode", "questionmarkasciicode", "doublequoteasciicode", "singlequoteasciicode", "forwardslashasciicode", "primeasciicode", "hyphenasciicode", "percentasciicode", "leftbracketasciicode", "rightbracketasciicode", "hsizefrozenparcode", "skipfrozenparcode", "hangfrozenparcode", "indentfrozenparcode", "parfillfrozenparcode", "adjustfrozenparcode", "protrudefrozenparcode", "tolerancefrozenparcode", "stretchfrozenparcode", "loosenessfrozenparcode", "lastlinefrozenparcode", "linepenaltyfrozenparcode", "clubpenaltyfrozenparcode", "widowpenaltyfrozenparcode", "displaypenaltyfrozenparcode", "brokenpenaltyfrozenparcode", "demeritsfrozenparcode", "shapefrozenparcode", "linefrozenparcode", "hyphenationfrozenparcode", "allfrozenparcode", "activemathcharcode", "activetabtoken", "activeformfeedtoken", "activeendoflinetoken", "batchmodecode", "nonstopmodecode", "scrollmodecode", "errorstopmodecode", "bottomlevelgroupcode", "simplegroupcode", "hboxgroupcode", "adjustedhboxgroupcode", "vboxgroupcode", "vtopgroupcode", "aligngroupcode", "noaligngroupcode", "outputgroupcode", "mathgroupcode", "discretionarygroupcode", "insertgroupcode", "vadjustgroupcode", "vcentergroupcode", "mathabovegroupcode", "mathchoicegroupcode", "alsosimplegroupcode", "semisimplegroupcode", "mathshiftgroupcode", "mathleftgroupcode", "localboxgroupcode", "splitoffgroupcode", "splitkeepgroupcode", "preamblegroupcode", "alignsetgroupcode", "finrowgroupcode", "discretionarygroupcode", "charnodecode", "hlistnodecode", "vlistnodecode", "rulenodecode", "insertnodecode", "marknodecode", "adjustnodecode", "ligaturenodecode", "discretionarynodecode", "whatsitnodecode", "mathnodecode", "gluenodecode", "kernnodecode", "penaltynodecode", "unsetnodecode", "mathsnodecode", "charifcode", "catifcode", "numifcode", "dimifcode", "oddifcode", "vmodeifcode", "hmodeifcode", "mmodeifcode", "innerifcode", "voidifcode", "hboxifcode", "vboxifcode", "xifcode", "eofifcode", "trueifcode", "falseifcode", "caseifcode", "definedifcode", "csnameifcode", "fontcharifcode", "overrulemathcontrolcode", "underrulemathcontrolcode", "radicalrulemathcontrolcode", "fractionrulemathcontrolcode", "accentskewhalfmathcontrolcode", "accentskewapplymathcontrolcode", "accentitalickernmathcontrolcode", "delimiteritalickernmathcontrolcode", "orditalickernmathcontrolcode", "charitalicwidthmathcontrolcode", "charitalicnoreboxmathcontrolcode", "boxednoitalickernmathcontrolcode", "nostaircasekernmathcontrolcode", "textitalickernmathcontrolcode", "noligaturingglyphoptioncode", 
"nokerningglyphoptioncode", "noexpansionglyphoptioncode", "noprotrusionglyphoptioncode", "noleftkerningglyphoptioncode", "noleftligaturingglyphoptioncode", "norightkerningglyphoptioncode", "norightligaturingglyphoptioncode", "noitaliccorrectionglyphoptioncode", "normalparcontextcode", "vmodeparcontextcode", "vboxparcontextcode", "vtopparcontextcode", "vcenterparcontextcode", "vadjustparcontextcode", "insertparcontextcode", "outputparcontextcode", "alignparcontextcode", "noalignparcontextcode", "spanparcontextcode", "resetparcontextcode", "fontslantperpoint", "fontinterwordspace", "fontinterwordstretch", "fontinterwordshrink", "fontexheight", "fontemwidth", "fontextraspace", "slantperpoint", "mathexheight", "mathemwidth", "interwordspace", "interwordstretch", "interwordshrink", "exheight", "emwidth", "extraspace", "mathaxisheight", "muquad", "startmode", "stopmode", "startnotmode", "stopnotmode", "startmodeset", "stopmodeset", "doifmode", "doifelsemode", "doifmodeelse", "doifnotmode", "startmodeset", "stopmodeset", "startallmodes", "stopallmodes", "startnotallmodes", "stopnotallmodes", "doifallmodes", "doifelseallmodes", "doifallmodeselse", "doifnotallmodes", "startenvironment", "stopenvironment", "environment", "startcomponent", "stopcomponent", "component", "startproduct", "stopproduct", "product", "startproject", "stopproject", "project", "starttext", "stoptext", "startnotext", "stopnotext", "startdocument", "stopdocument", "documentvariable", "unexpandeddocumentvariable", "setupdocument", "presetdocument", "doifelsedocumentvariable", "doifdocumentvariableelse", "doifdocumentvariable", "doifnotdocumentvariable", "startmodule", "stopmodule", "usemodule", "usetexmodule", "useluamodule", "setupmodule", "currentmoduleparameter", "moduleparameter", "everystarttext", "everystoptext", "startTEXpage", "stopTEXpage", "enablemode", "disablemode", "preventmode", "definemode", "globalenablemode", "globaldisablemode", "globalpreventmode", "pushmode", "popmode", "typescriptone", "typescripttwo", "typescriptthree", "mathsizesuffix", "mathordcode", "mathopcode", "mathbincode", "mathrelcode", "mathopencode", "mathclosecode", "mathpunctcode", "mathalphacode", "mathinnercode", "mathnothingcode", "mathlimopcode", "mathnolopcode", "mathboxcode", "mathchoicecode", "mathaccentcode", "mathradicalcode", "constantnumber", "constantnumberargument", "constantdimen", "constantdimenargument", "constantemptyargument", "continueifinputfile", "luastringsep", "!!bs", "!!es", "lefttorightmark", "righttoleftmark", "lrm", "rlm", "bidilre", "bidirle", "bidipop", "bidilro", "bidirlo", "breakablethinspace", "nobreakspace", "nonbreakablespace", "narrownobreakspace", "zerowidthnobreakspace", "ideographicspace", "ideographichalffillspace", "twoperemspace", "threeperemspace", "fourperemspace", "fiveperemspace", "sixperemspace", "figurespace", "punctuationspace", "hairspace", "enquad", "emquad", "zerowidthspace", "zerowidthnonjoiner", "zerowidthjoiner", "zwnj", "zwj", "optionalspace", "asciispacechar", "softhyphen", "Ux", "eUx", "Umathaccents", "parfillleftskip", "parfillrightskip", "startlmtxmode", "stoplmtxmode", "startmkivmode", "stopmkivmode", "wildcardsymbol", "normalhyphenationcode", "automatichyphenationcode", "explicithyphenationcode", "syllablehyphenationcode", "uppercasehyphenationcode", "collapsehyphenationmcode", "compoundhyphenationcode", "strictstarthyphenationcode", "strictendhyphenationcode", "automaticpenaltyhyphenationcode", "explicitpenaltyhyphenationcode", "permitgluehyphenationcode", "permitallhyphenationcode", 
"permitmathreplacehyphenationcode", "forcecheckhyphenationcode", "lazyligatureshyphenationcode", "forcehandlerhyphenationcode", "feedbackcompoundhyphenationcode", "ignoreboundshyphenationcode", "partialhyphenationcode", "completehyphenationcode", "normalizelinenormalizecode", "parindentskipnormalizecode", "swaphangindentnormalizecode", "swapparsshapenormalizecode", "breakafterdirnormalizecode", "removemarginkernsnormalizecode", "clipwidthnormalizecode", "flattendiscretionariesnormalizecode", "discardzerotabskipsnormalizecode", "noligaturingglyphoptioncode", "nokerningglyphoptioncode", "noleftligatureglyphoptioncode", "noleftkernglyphoptioncode", "norightligatureglyphoptioncode", "norightkernglyphoptioncode", "noexpansionglyphoptioncode", "noprotrusionglyphoptioncode", "noitaliccorrectionglyphoptioncode", "nokerningcode", "noligaturingcode", "frozenflagcode", "tolerantflagcode", "protectedflagcode", "primitiveflagcode", "permanentflagcode", "noalignedflagcode", "immutableflagcode", "mutableflagcode", "globalflagcode", "overloadedflagcode", "immediateflagcode", "conditionalflagcode", "valueflagcode", "instanceflagcode", "ordmathflattencode", "binmathflattencode", "relmathflattencode", "punctmathflattencode", "innermathflattencode", "normalworddiscoptioncode", "preworddiscoptioncode", "postworddiscoptioncode", "continuewhenlmtxmode" }, + ["constants"]={ "zerocount", "minusone", "minustwo", "plusone", "plustwo", "plusthree", "plusfour", "plusfive", "plussix", "plusseven", "pluseight", "plusnine", "plusten", "pluseleven", "plustwelve", "plussixteen", "plusfifty", "plushundred", "plusonehundred", "plustwohundred", "plusfivehundred", "plusthousand", "plustenthousand", "plustwentythousand", "medcard", "maxcard", "maxcardminusone", "zeropoint", "onepoint", "halfapoint", "onebasepoint", "maxcount", "maxdimen", "scaledpoint", "thousandpoint", "points", "halfpoint", "zeroskip", "zeromuskip", "onemuskip", "pluscxxvii", "pluscxxviii", "pluscclv", "pluscclvi", "normalpagebox", "directionlefttoright", "directionrighttoleft", "endoflinetoken", "outputnewlinechar", "emptytoks", "empty", "undefined", "prerollrun", "voidbox", "emptybox", "emptyvbox", "emptyhbox", "bigskipamount", "medskipamount", "smallskipamount", "fmtname", "fmtversion", "texengine", "texenginename", "texengineversion", "texenginefunctionality", "luatexengine", "pdftexengine", "xetexengine", "unknownengine", "contextformat", "contextversion", "contextlmtxmode", "contextmark", "mksuffix", "activecatcode", "bgroup", "egroup", "endline", "conditionaltrue", "conditionalfalse", "attributeunsetvalue", "statuswrite", "uprotationangle", "rightrotationangle", "downrotationangle", "leftrotationangle", "inicatcodes", "ctxcatcodes", "texcatcodes", "notcatcodes", "txtcatcodes", "vrbcatcodes", "prtcatcodes", "nilcatcodes", "luacatcodes", "tpacatcodes", "tpbcatcodes", "xmlcatcodes", "ctdcatcodes", "rlncatcodes", "escapecatcode", "begingroupcatcode", "endgroupcatcode", "mathshiftcatcode", "alignmentcatcode", "endoflinecatcode", "parametercatcode", "superscriptcatcode", "subscriptcatcode", "ignorecatcode", "spacecatcode", "lettercatcode", "othercatcode", "activecatcode", "commentcatcode", "invalidcatcode", "tabasciicode", "newlineasciicode", "formfeedasciicode", "endoflineasciicode", "endoffileasciicode", "commaasciicode", "spaceasciicode", "periodasciicode", "hashasciicode", "dollarasciicode", "commentasciicode", "ampersandasciicode", "colonasciicode", "backslashasciicode", "circumflexasciicode", "underscoreasciicode", "leftbraceasciicode", "barasciicode", 
"rightbraceasciicode", "tildeasciicode", "delasciicode", "leftparentasciicode", "rightparentasciicode", "lessthanasciicode", "morethanasciicode", "doublecommentsignal", "atsignasciicode", "exclamationmarkasciicode", "questionmarkasciicode", "doublequoteasciicode", "singlequoteasciicode", "forwardslashasciicode", "primeasciicode", "hyphenasciicode", "percentasciicode", "leftbracketasciicode", "rightbracketasciicode", "hsizefrozenparcode", "skipfrozenparcode", "hangfrozenparcode", "indentfrozenparcode", "parfillfrozenparcode", "adjustfrozenparcode", "protrudefrozenparcode", "tolerancefrozenparcode", "stretchfrozenparcode", "loosenessfrozenparcode", "lastlinefrozenparcode", "linepenaltyfrozenparcode", "clubpenaltyfrozenparcode", "widowpenaltyfrozenparcode", "displaypenaltyfrozenparcode", "brokenpenaltyfrozenparcode", "demeritsfrozenparcode", "shapefrozenparcode", "linefrozenparcode", "hyphenationfrozenparcode", "allfrozenparcode", "activemathcharcode", "activetabtoken", "activeformfeedtoken", "activeendoflinetoken", "batchmodecode", "nonstopmodecode", "scrollmodecode", "errorstopmodecode", "bottomlevelgroupcode", "simplegroupcode", "hboxgroupcode", "adjustedhboxgroupcode", "vboxgroupcode", "vtopgroupcode", "aligngroupcode", "noaligngroupcode", "outputgroupcode", "mathgroupcode", "discretionarygroupcode", "insertgroupcode", "vadjustgroupcode", "vcentergroupcode", "mathabovegroupcode", "mathchoicegroupcode", "alsosimplegroupcode", "semisimplegroupcode", "mathshiftgroupcode", "mathleftgroupcode", "localboxgroupcode", "splitoffgroupcode", "splitkeepgroupcode", "preamblegroupcode", "alignsetgroupcode", "finrowgroupcode", "discretionarygroupcode", "charnodecode", "hlistnodecode", "vlistnodecode", "rulenodecode", "insertnodecode", "marknodecode", "adjustnodecode", "ligaturenodecode", "discretionarynodecode", "whatsitnodecode", "mathnodecode", "gluenodecode", "kernnodecode", "penaltynodecode", "unsetnodecode", "mathsnodecode", "charifcode", "catifcode", "numifcode", "dimifcode", "oddifcode", "vmodeifcode", "hmodeifcode", "mmodeifcode", "innerifcode", "voidifcode", "hboxifcode", "vboxifcode", "xifcode", "eofifcode", "trueifcode", "falseifcode", "caseifcode", "definedifcode", "csnameifcode", "fontcharifcode", "overrulemathcontrolcode", "underrulemathcontrolcode", "radicalrulemathcontrolcode", "fractionrulemathcontrolcode", "accentskewhalfmathcontrolcode", "accentskewapplymathcontrolcode", "accentitalickernmathcontrolcode", "delimiteritalickernmathcontrolcode", "orditalickernmathcontrolcode", "charitalicwidthmathcontrolcode", "charitalicnoreboxmathcontrolcode", "boxednoitalickernmathcontrolcode", "nostaircasekernmathcontrolcode", "textitalickernmathcontrolcode", "noligaturingglyphoptioncode", "nokerningglyphoptioncode", "noexpansionglyphoptioncode", "noprotrusionglyphoptioncode", "noleftkerningglyphoptioncode", "noleftligaturingglyphoptioncode", "norightkerningglyphoptioncode", "norightligaturingglyphoptioncode", "noitaliccorrectionglyphoptioncode", "normalparcontextcode", "vmodeparcontextcode", "vboxparcontextcode", "vtopparcontextcode", "vcenterparcontextcode", "vadjustparcontextcode", "insertparcontextcode", "outputparcontextcode", "alignparcontextcode", "noalignparcontextcode", "spanparcontextcode", "resetparcontextcode", "fontslantperpoint", "fontinterwordspace", "fontinterwordstretch", "fontinterwordshrink", "fontexheight", "fontemwidth", "fontextraspace", "slantperpoint", "mathexheight", "mathemwidth", "interwordspace", "interwordstretch", "interwordshrink", "exheight", "emwidth", "extraspace", 
"mathaxisheight", "muquad", "startmode", "stopmode", "startnotmode", "stopnotmode", "startmodeset", "stopmodeset", "doifmode", "doifelsemode", "doifmodeelse", "doifnotmode", "startmodeset", "stopmodeset", "startallmodes", "stopallmodes", "startnotallmodes", "stopnotallmodes", "doifallmodes", "doifelseallmodes", "doifallmodeselse", "doifnotallmodes", "startenvironment", "stopenvironment", "environment", "startcomponent", "stopcomponent", "component", "startproduct", "stopproduct", "product", "startproject", "stopproject", "project", "starttext", "stoptext", "startnotext", "stopnotext", "startdocument", "stopdocument", "documentvariable", "unexpandeddocumentvariable", "setupdocument", "presetdocument", "doifelsedocumentvariable", "doifdocumentvariableelse", "doifdocumentvariable", "doifnotdocumentvariable", "startmodule", "stopmodule", "usemodule", "usetexmodule", "useluamodule", "setupmodule", "currentmoduleparameter", "moduleparameter", "everystarttext", "everystoptext", "startTEXpage", "stopTEXpage", "enablemode", "disablemode", "preventmode", "definemode", "globalenablemode", "globaldisablemode", "globalpreventmode", "pushmode", "popmode", "typescriptone", "typescripttwo", "typescriptthree", "mathsizesuffix", "mathordcode", "mathopcode", "mathbincode", "mathrelcode", "mathopencode", "mathclosecode", "mathpunctcode", "mathalphacode", "mathinnercode", "mathnothingcode", "mathlimopcode", "mathnolopcode", "mathboxcode", "mathchoicecode", "mathaccentcode", "mathradicalcode", "constantnumber", "constantnumberargument", "constantdimen", "constantdimenargument", "constantemptyargument", "continueifinputfile", "luastringsep", "!!bs", "!!es", "lefttorightmark", "righttoleftmark", "lrm", "rlm", "bidilre", "bidirle", "bidipop", "bidilro", "bidirlo", "breakablethinspace", "nobreakspace", "nonbreakablespace", "narrownobreakspace", "zerowidthnobreakspace", "ideographicspace", "ideographichalffillspace", "twoperemspace", "threeperemspace", "fourperemspace", "fiveperemspace", "sixperemspace", "figurespace", "punctuationspace", "hairspace", "enquad", "emquad", "zerowidthspace", "zerowidthnonjoiner", "zerowidthjoiner", "zwnj", "zwj", "optionalspace", "asciispacechar", "softhyphen", "Ux", "eUx", "Umathaccents", "parfillleftskip", "parfillrightskip", "startlmtxmode", "stoplmtxmode", "startmkivmode", "stopmkivmode", "wildcardsymbol", "normalhyphenationcode", "automatichyphenationcode", "explicithyphenationcode", "syllablehyphenationcode", "uppercasehyphenationcode", "collapsehyphenationcode", "compoundhyphenationcode", "strictstarthyphenationcode", "strictendhyphenationcode", "automaticpenaltyhyphenationcode", "explicitpenaltyhyphenationcode", "permitgluehyphenationcode", "permitallhyphenationcode", "permitmathreplacehyphenationcode", "forcecheckhyphenationcode", "lazyligatureshyphenationcode", "forcehandlerhyphenationcode", "feedbackcompoundhyphenationcode", "ignoreboundshyphenationcode", "partialhyphenationcode", "completehyphenationcode", "normalizelinenormalizecode", "parindentskipnormalizecode", "swaphangindentnormalizecode", "swapparsshapenormalizecode", "breakafterdirnormalizecode", "removemarginkernsnormalizecode", "clipwidthnormalizecode", "flattendiscretionariesnormalizecode", "discardzerotabskipsnormalizecode", "noligaturingglyphoptioncode", "nokerningglyphoptioncode", "noleftligatureglyphoptioncode", "noleftkernglyphoptioncode", "norightligatureglyphoptioncode", "norightkernglyphoptioncode", "noexpansionglyphoptioncode", "noprotrusionglyphoptioncode", "noitaliccorrectionglyphoptioncode", 
"nokerningcode", "noligaturingcode", "frozenflagcode", "tolerantflagcode", "protectedflagcode", "primitiveflagcode", "permanentflagcode", "noalignedflagcode", "immutableflagcode", "mutableflagcode", "globalflagcode", "overloadedflagcode", "immediateflagcode", "conditionalflagcode", "valueflagcode", "instanceflagcode", "ordmathflattencode", "binmathflattencode", "relmathflattencode", "punctmathflattencode", "innermathflattencode", "normalworddiscoptioncode", "preworddiscoptioncode", "postworddiscoptioncode", "continuewhenlmtxmode" }, ["helpers"]={ "startsetups", "stopsetups", "startxmlsetups", "stopxmlsetups", "startluasetups", "stopluasetups", "starttexsetups", "stoptexsetups", "startrawsetups", "stoprawsetups", "startlocalsetups", "stoplocalsetups", "starttexdefinition", "stoptexdefinition", "starttexcode", "stoptexcode", "startcontextcode", "stopcontextcode", "startcontextdefinitioncode", "stopcontextdefinitioncode", "texdefinition", "doifelsesetups", "doifsetupselse", "doifsetups", "doifnotsetups", "setup", "setups", "texsetup", "xmlsetup", "luasetup", "directsetup", "fastsetup", "copysetups", "resetsetups", "doifelsecommandhandler", "doifcommandhandlerelse", "doifnotcommandhandler", "doifcommandhandler", "newmode", "setmode", "resetmode", "newsystemmode", "setsystemmode", "resetsystemmode", "pushsystemmode", "popsystemmode", "globalsetmode", "globalresetmode", "globalsetsystemmode", "globalresetsystemmode", "booleanmodevalue", "newcount", "newdimen", "newskip", "newmuskip", "newbox", "newtoks", "newread", "newwrite", "newmarks", "newinsert", "newattribute", "newif", "newlanguage", "newfamily", "newfam", "newhelp", "then", "begcsname", "autorule", "strippedcsname", "checkedstrippedcsname", "nofarguments", "firstargumentfalse", "firstargumenttrue", "secondargumentfalse", "secondargumenttrue", "thirdargumentfalse", "thirdargumenttrue", "fourthargumentfalse", "fourthargumenttrue", "fifthargumentfalse", "fifthargumenttrue", "sixthargumentfalse", "sixthargumenttrue", "seventhargumentfalse", "seventhargumenttrue", "vkern", "hkern", "vpenalty", "hpenalty", "doglobal", "dodoglobal", "redoglobal", "resetglobal", "donothing", "untraceddonothing", "dontcomplain", "lessboxtracing", "forgetall", "donetrue", "donefalse", "foundtrue", "foundfalse", "inlineordisplaymath", "indisplaymath", "forcedisplaymath", "startforceddisplaymath", "stopforceddisplaymath", "startpickupmath", "stoppickupmath", "reqno", "mathortext", "thebox", "htdp", "unvoidbox", "hfilll", "vfilll", "mathbox", "mathlimop", "mathnolop", "mathnothing", "mathalpha", "currentcatcodetable", "defaultcatcodetable", "catcodetablename", "newcatcodetable", "startcatcodetable", "stopcatcodetable", "startextendcatcodetable", "stopextendcatcodetable", "pushcatcodetable", "popcatcodetable", "restorecatcodes", "setcatcodetable", "letcatcodecommand", "defcatcodecommand", "uedcatcodecommand", "hglue", "vglue", "hfillneg", "vfillneg", "hfilllneg", "vfilllneg", "ruledhss", "ruledhfil", "ruledhfill", "ruledhfilll", "ruledhfilneg", "ruledhfillneg", "normalhfillneg", "normalhfilllneg", "ruledvss", "ruledvfil", "ruledvfill", "ruledvfilll", "ruledvfilneg", "ruledvfillneg", "normalvfillneg", "normalvfilllneg", "ruledhbox", "ruledvbox", "ruledvtop", "ruledvcenter", "ruledmbox", "ruledhpack", "ruledvpack", "ruledtpack", "ruledhskip", "ruledvskip", "ruledkern", "ruledmskip", "ruledmkern", "ruledhglue", "ruledvglue", "normalhglue", "normalvglue", "ruledpenalty", "filledhboxb", "filledhboxr", "filledhboxg", "filledhboxc", "filledhboxm", "filledhboxy", 
"filledhboxk", "scratchstring", "scratchstringone", "scratchstringtwo", "tempstring", "scratchcounter", "globalscratchcounter", "privatescratchcounter", "scratchdimen", "globalscratchdimen", "privatescratchdimen", "scratchskip", "globalscratchskip", "privatescratchskip", "scratchmuskip", "globalscratchmuskip", "privatescratchmuskip", "scratchtoks", "globalscratchtoks", "privatescratchtoks", "scratchbox", "globalscratchbox", "privatescratchbox", "scratchmacro", "scratchmacroone", "scratchmacrotwo", "scratchconditiontrue", "scratchconditionfalse", "ifscratchcondition", "scratchconditiononetrue", "scratchconditiononefalse", "ifscratchconditionone", "scratchconditiontwotrue", "scratchconditiontwofalse", "ifscratchconditiontwo", "globalscratchcounterone", "globalscratchcountertwo", "globalscratchcounterthree", "groupedcommand", "groupedcommandcs", "triggergroupedcommand", "triggergroupedcommandcs", "simplegroupedcommand", "simplegroupedcommandcs", "pickupgroupedcommand", "pickupgroupedcommandcs", "usedbaselineskip", "usedlineskip", "usedlineskiplimit", "availablehsize", "localhsize", "setlocalhsize", "distributedhsize", "hsizefraction", "next", "nexttoken", "nextbox", "dowithnextbox", "dowithnextboxcs", "dowithnextboxcontent", "dowithnextboxcontentcs", "flushnextbox", "boxisempty", "boxtostring", "contentostring", "prerolltostring", "givenwidth", "givenheight", "givendepth", "scangivendimensions", "scratchwidth", "scratchheight", "scratchdepth", "scratchoffset", "scratchdistance", "scratchtotal", "scratchhsize", "scratchvsize", "scratchxoffset", "scratchyoffset", "scratchhoffset", "scratchvoffset", "scratchxposition", "scratchyposition", "scratchtopoffset", "scratchbottomoffset", "scratchleftoffset", "scratchrightoffset", "scratchcounterone", "scratchcountertwo", "scratchcounterthree", "scratchcounterfour", "scratchcounterfive", "scratchcountersix", "scratchdimenone", "scratchdimentwo", "scratchdimenthree", "scratchdimenfour", "scratchdimenfive", "scratchdimensix", "scratchskipone", "scratchskiptwo", "scratchskipthree", "scratchskipfour", "scratchskipfive", "scratchskipsix", "scratchmuskipone", "scratchmuskiptwo", "scratchmuskipthree", "scratchmuskipfour", "scratchmuskipfive", "scratchmuskipsix", "scratchtoksone", "scratchtokstwo", "scratchtoksthree", "scratchtoksfour", "scratchtoksfive", "scratchtokssix", "scratchboxone", "scratchboxtwo", "scratchboxthree", "scratchboxfour", "scratchboxfive", "scratchboxsix", "scratchnx", "scratchny", "scratchmx", "scratchmy", "scratchunicode", "scratchmin", "scratchmax", "scratchleftskip", "scratchrightskip", "scratchtopskip", "scratchbottomskip", "doif", "doifnot", "doifelse", "firstinset", "doifinset", "doifnotinset", "doifelseinset", "doifinsetelse", "doifelsenextchar", "doifnextcharelse", "doifelsenextcharcs", "doifnextcharcselse", "doifelsenextoptional", "doifnextoptionalelse", "doifelsenextoptionalcs", "doifnextoptionalcselse", "doifelsefastoptionalcheck", "doiffastoptionalcheckelse", "doifelsefastoptionalcheckcs", "doiffastoptionalcheckcselse", "doifelsenextbgroup", "doifnextbgroupelse", "doifelsenextbgroupcs", "doifnextbgroupcselse", "doifelsenextparenthesis", "doifnextparenthesiselse", "doifelseundefined", "doifundefinedelse", "doifelsedefined", "doifdefinedelse", "doifundefined", "doifdefined", "doifelsevalue", "doifvalue", "doifnotvalue", "doifnothing", "doifsomething", "doifelsenothing", "doifnothingelse", "doifelsesomething", "doifsomethingelse", "doifvaluenothing", "doifvaluesomething", "doifelsevaluenothing", "doifvaluenothingelse", 
"doifelsedimension", "doifdimensionelse", "doifelsenumber", "doifnumberelse", "doifnumber", "doifnotnumber", "doifelsecommon", "doifcommonelse", "doifcommon", "doifnotcommon", "doifinstring", "doifnotinstring", "doifelseinstring", "doifinstringelse", "doifelseassignment", "doifassignmentelse", "docheckassignment", "doifelseassignmentcs", "doifassignmentelsecs", "validassignment", "novalidassignment", "doiftext", "doifelsetext", "doiftextelse", "doifnottext", "quitcondition", "truecondition", "falsecondition", "tracingall", "tracingnone", "loggingall", "tracingcatcodes", "showluatokens", "aliasmacro", "removetoks", "appendtoks", "prependtoks", "appendtotoks", "prependtotoks", "to", "endgraf", "endpar", "reseteverypar", "finishpar", "empty", "null", "space", "quad", "enspace", "emspace", "charspace", "nbsp", "crlf", "obeyspaces", "obeylines", "obeytabs", "obeypages", "obeyedspace", "obeyedline", "obeyedtab", "obeyedpage", "normalspace", "naturalspace", "controlspace", "normalspaces", "ignoretabs", "ignorelines", "ignorepages", "ignoreeofs", "setcontrolspaces", "executeifdefined", "singleexpandafter", "doubleexpandafter", "tripleexpandafter", "dontleavehmode", "removelastspace", "removeunwantedspaces", "keepunwantedspaces", "removepunctuation", "ignoreparskip", "forcestrutdepth", "onlynonbreakablespace", "wait", "writestatus", "define", "defineexpandable", "redefine", "setmeasure", "setemeasure", "setgmeasure", "setxmeasure", "definemeasure", "freezemeasure", "measure", "measured", "directmeasure", "setquantity", "setequantity", "setgquantity", "setxquantity", "definequantity", "freezequantity", "quantity", "quantitied", "directquantity", "installcorenamespace", "getvalue", "getuvalue", "setvalue", "setevalue", "setgvalue", "setxvalue", "letvalue", "letgvalue", "resetvalue", "undefinevalue", "ignorevalue", "setuvalue", "setuevalue", "setugvalue", "setuxvalue", "globallet", "udef", "ugdef", "uedef", "uxdef", "checked", "unique", "getparameters", "geteparameters", "getgparameters", "getxparameters", "forgetparameters", "copyparameters", "getdummyparameters", "dummyparameter", "directdummyparameter", "setdummyparameter", "letdummyparameter", "setexpandeddummyparameter", "usedummystyleandcolor", "usedummystyleparameter", "usedummycolorparameter", "processcommalist", "processcommacommand", "quitcommalist", "quitprevcommalist", "processaction", "processallactions", "processfirstactioninset", "processallactionsinset", "unexpanded", "expanded", "startexpanded", "stopexpanded", "protect", "unprotect", "firstofoneargument", "firstoftwoarguments", "secondoftwoarguments", "firstofthreearguments", "secondofthreearguments", "thirdofthreearguments", "firstoffourarguments", "secondoffourarguments", "thirdoffourarguments", "fourthoffourarguments", "firstoffivearguments", "secondoffivearguments", "thirdoffivearguments", "fourthoffivearguments", "fifthoffivearguments", "firstofsixarguments", "secondofsixarguments", "thirdofsixarguments", "fourthofsixarguments", "fifthofsixarguments", "sixthofsixarguments", "firstofoneunexpanded", "firstoftwounexpanded", "secondoftwounexpanded", "firstofthreeunexpanded", "secondofthreeunexpanded", "thirdofthreeunexpanded", "gobbleoneargument", "gobbletwoarguments", "gobblethreearguments", "gobblefourarguments", "gobblefivearguments", "gobblesixarguments", "gobblesevenarguments", "gobbleeightarguments", "gobbleninearguments", "gobbletenarguments", "gobbleoneoptional", "gobbletwooptionals", "gobblethreeoptionals", "gobblefouroptionals", "gobblefiveoptionals", "dorecurse", 
"doloop", "exitloop", "dostepwiserecurse", "recurselevel", "recursedepth", "dofastloopcs", "fastloopindex", "fastloopfinal", "dowith", "doloopovermatch", "doloopovermatched", "doloopoverlist", "newconstant", "setnewconstant", "setconstant", "setconstantvalue", "newconditional", "settrue", "setfalse", "settruevalue", "setfalsevalue", "setconditional", "newmacro", "setnewmacro", "newfraction", "newsignal", "dosingleempty", "dodoubleempty", "dotripleempty", "doquadrupleempty", "doquintupleempty", "dosixtupleempty", "doseventupleempty", "dosingleargument", "dodoubleargument", "dotripleargument", "doquadrupleargument", "doquintupleargument", "dosixtupleargument", "doseventupleargument", "dosinglegroupempty", "dodoublegroupempty", "dotriplegroupempty", "doquadruplegroupempty", "doquintuplegroupempty", "permitspacesbetweengroups", "dontpermitspacesbetweengroups", "nopdfcompression", "maximumpdfcompression", "normalpdfcompression", "onlypdfobjectcompression", "nopdfobjectcompression", "modulonumber", "dividenumber", "getfirstcharacter", "doifelsefirstchar", "doiffirstcharelse", "startnointerference", "stopnointerference", "twodigits", "threedigits", "leftorright", "offinterlineskip", "oninterlineskip", "nointerlineskip", "strut", "halfstrut", "quarterstrut", "depthstrut", "halflinestrut", "noheightstrut", "setstrut", "strutbox", "strutht", "strutdp", "strutwd", "struthtdp", "strutgap", "begstrut", "endstrut", "lineheight", "leftboundary", "rightboundary", "signalcharacter", "aligncontentleft", "aligncontentmiddle", "aligncontentright", "shiftbox", "vpackbox", "hpackbox", "vpackedbox", "hpackedbox", "ordordspacing", "ordopspacing", "ordbinspacing", "ordrelspacing", "ordopenspacing", "ordclosespacing", "ordpunctspacing", "ordinnerspacing", "opordspacing", "opopspacing", "opbinspacing", "oprelspacing", "opopenspacing", "opclosespacing", "oppunctspacing", "opinnerspacing", "binordspacing", "binopspacing", "binbinspacing", "binrelspacing", "binopenspacing", "binclosespacing", "binpunctspacing", "bininnerspacing", "relordspacing", "relopspacing", "relbinspacing", "relrelspacing", "relopenspacing", "relclosespacing", "relpunctspacing", "relinnerspacing", "openordspacing", "openopspacing", "openbinspacing", "openrelspacing", "openopenspacing", "openclosespacing", "openpunctspacing", "openinnerspacing", "closeordspacing", "closeopspacing", "closebinspacing", "closerelspacing", "closeopenspacing", "closeclosespacing", "closepunctspacing", "closeinnerspacing", "punctordspacing", "punctopspacing", "punctbinspacing", "punctrelspacing", "punctopenspacing", "punctclosespacing", "punctpunctspacing", "punctinnerspacing", "innerordspacing", "inneropspacing", "innerbinspacing", "innerrelspacing", "inneropenspacing", "innerclosespacing", "innerpunctspacing", "innerinnerspacing", "normalreqno", "startimath", "stopimath", "normalstartimath", "normalstopimath", "startdmath", "stopdmath", "normalstartdmath", "normalstopdmath", "normalsuperscript", "normalsubscript", "normalnosuperscript", "normalnosubscript", "superscript", "subscript", "nosuperscript", "nosubscript", "superprescript", "subprescript", "nosuperprescript", "nosubsprecript", "uncramped", "cramped", "mathstyletrigger", "triggermathstyle", "mathstylefont", "mathsmallstylefont", "mathstyleface", "mathsmallstyleface", "mathstylecommand", "mathpalette", "mathstylehbox", "mathstylevbox", "mathstylevcenter", "mathstylevcenteredhbox", "mathstylevcenteredvbox", "mathtext", "setmathsmalltextbox", "setmathtextbox", "pushmathstyle", "popmathstyle", 
"triggerdisplaystyle", "triggertextstyle", "triggerscriptstyle", "triggerscriptscriptstyle", "triggeruncrampedstyle", "triggercrampedstyle", "triggersmallstyle", "triggeruncrampedsmallstyle", "triggercrampedsmallstyle", "triggerbigstyle", "triggeruncrampedbigstyle", "triggercrampedbigstyle", "luaexpr", "expelsedoif", "expdoif", "expdoifnot", "expdoifelsecommon", "expdoifcommonelse", "expdoifelseinset", "expdoifinsetelse", "ctxdirectlua", "ctxlatelua", "ctxsprint", "ctxwrite", "ctxcommand", "ctxdirectcommand", "ctxlatecommand", "ctxreport", "ctxlua", "luacode", "lateluacode", "directluacode", "registerctxluafile", "ctxloadluafile", "luaversion", "luamajorversion", "luaminorversion", "ctxluacode", "luaconditional", "luaexpanded", "ctxluamatch", "startluaparameterset", "stopluaparameterset", "luaparameterset", "definenamedlua", "obeylualines", "obeyluatokens", "startluacode", "stopluacode", "startlua", "stoplua", "startctxfunction", "stopctxfunction", "ctxfunction", "startctxfunctiondefinition", "stopctxfunctiondefinition", "installctxfunction", "installprotectedctxfunction", "installprotectedctxscanner", "installctxscanner", "resetctxscanner", "cldprocessfile", "cldloadfile", "cldloadviafile", "cldcontext", "cldcommand", "carryoverpar", "freezeparagraphproperties", "defrostparagraphproperties", "setparagraphfreezing", "forgetparagraphfreezing", "updateparagraphproperties", "updateparagraphpenalties", "updateparagraphdemerits", "updateparagraphshapes", "updateparagraphlines", "lastlinewidth", "assumelongusagecs", "Umathbotaccent", "Umathtopaccent", "righttolefthbox", "lefttorighthbox", "righttoleftvbox", "lefttorightvbox", "righttoleftvtop", "lefttorightvtop", "rtlhbox", "ltrhbox", "rtlvbox", "ltrvbox", "rtlvtop", "ltrvtop", "autodirhbox", "autodirvbox", "autodirvtop", "leftorrighthbox", "leftorrightvbox", "leftorrightvtop", "lefttoright", "righttoleft", "checkedlefttoright", "checkedrighttoleft", "synchronizelayoutdirection", "synchronizedisplaydirection", "synchronizeinlinedirection", "dirlre", "dirrle", "dirlro", "dirrlo", "lesshyphens", "morehyphens", "nohyphens", "dohyphens", "dohyphencollapsing", "nohyphencollapsing", "compounddiscretionary", "Ucheckedstartdisplaymath", "Ucheckedstopdisplaymath", "break", "nobreak", "allowbreak", "goodbreak", "nospace", "nospacing", "dospacing", "naturalhbox", "naturalvbox", "naturalvtop", "naturalhpack", "naturalvpack", "naturaltpack", "reversehbox", "reversevbox", "reversevtop", "reversehpack", "reversevpack", "reversetpack", "hcontainer", "vcontainer", "tcontainer", "frule", "compoundhyphenpenalty", "start", "stop", "unsupportedcs", "openout", "closeout", "write", "openin", "closein", "read", "readline", "readfromterminal", "boxlines", "boxline", "setboxline", "copyboxline", "boxlinewd", "boxlineht", "boxlinedp", "boxlinenw", "boxlinenh", "boxlinend", "boxlinels", "boxliners", "boxlinelh", "boxlinerh", "boxlinelp", "boxlinerp", "boxlinein", "boxrangewd", "boxrangeht", "boxrangedp", "bitwiseset", "bitwiseand", "bitwiseor", "bitwisexor", "bitwisenot", "bitwisenil", "ifbitwiseand", "bitwise", "bitwiseshift", "bitwiseflip", "textdir", "linedir", "pardir", "boxdir", "prelistbox", "postlistbox", "prelistcopy", "postlistcopy", "setprelistbox", "setpostlistbox", "noligaturing", "nokerning", "noexpansion", "noprotrusion", "noleftkerning", "noleftligaturing", "norightkerning", "norightligaturing", "noitaliccorrection", "futureletnexttoken", "defbackslashbreak", "letbackslashbreak", "pushoverloadmode", "popoverloadmode", "pushrunstate", "poprunstate", 
"suggestedalias", "showboxhere", "discoptioncodestring", "flagcodestring", "frozenparcodestring", "glyphoptioncodestring", "groupcodestring", "hyphenationcodestring", "mathcontrolcodestring", "mathflattencodestring", "normalizecodestring", "parcontextcodestring", "newlocalcount", "newlocaldimen", "newlocalskip", "newlocalmuskip", "newlocaltoks", "newlocalbox", "newlocalwrite", "newlocalread", "setnewlocalcount", "setnewlocaldimen", "setnewlocalskip", "setnewlocalmuskip", "setnewlocaltoks", "setnewlocalbox", "ifexpression" }, } \ No newline at end of file diff --git a/context/data/scite/context/lexers/lexer.lua b/context/data/scite/context/lexers/lexer.lua deleted file mode 100644 index 9582f6a76..000000000 --- a/context/data/scite/context/lexers/lexer.lua +++ /dev/null @@ -1,3 +0,0 @@ --- this works ok: - -return require("scite-context-lexer") diff --git a/context/data/scite/context/lexers/scite-context-lexer-bibtex.lua b/context/data/scite/context/lexers/scite-context-lexer-bibtex.lua index b53da82ea..09c67548f 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-bibtex.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-bibtex.lua @@ -10,15 +10,13 @@ local global, string, table, lpeg = _G, string, table, lpeg local P, R, S, V = lpeg.P, lpeg.R, lpeg.S, lpeg.V local type = type -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token -local exact_match = lexer.exact_match +local patterns = lexers.patterns +local token = lexers.token -local bibtexlexer = lexer.new("bib","scite-context-lexer-bibtex") -local whitespace = bibtexlexer.whitespace +local bibtexlexer = lexers.new("bib","scite-context-lexer-bibtex") +local bibtexwhitespace = bibtexlexer.whitespace local escape, left, right = P("\\"), P('{'), P('}') @@ -52,8 +50,8 @@ local d_quoted = ((escape*double) + (1-double))^0 local balanced = patterns.balanced -local t_spacing = token(whitespace, space^1) -local t_optionalws = token("default", space^1)^0 +local t_spacing = token(bibtexwhitespace,space^1) +local t_optionalws = token("default",space^1)^0 local t_equal = token("operator",equal) local t_left = token("grouping",left) @@ -127,18 +125,18 @@ local t_rest = token("default",anything) -- to some extend but not in all cases (e.g. editing inside line fails) .. maybe i need to -- patch the dll ... (better not) -local dummylexer = lexer.load("scite-context-lexer-dummy","bib-dum") +local dummylexer = lexers.load("scite-context-lexer-dummy","bib-dum") local dummystart = token("embedded",P("\001")) -- an unlikely to be used character local dummystop = token("embedded",P("\002")) -- an unlikely to be used character -lexer.embed_lexer(bibtexlexer,dummylexer,dummystart,dummystop) +lexers.embed(bibtexlexer,dummylexer,dummystart,dummystop) -- maybe we need to define each functional block as lexer (some 4) so i'll do that when -- this issue is persistent ... maybe consider making a local lexer options (not load, --- just lexer.new or so) .. or maybe do the reverse, embed the main one in a dummy child +-- just lexers.new or so) .. 
or maybe do the reverse, embed the main one in a dummy child -bibtexlexer._rules = { +bibtexlexer.rules = { { "whitespace", t_spacing }, { "forget", t_forget }, { "shortcut", t_shortcut }, @@ -165,7 +163,7 @@ bibtexlexer._rules = { -- * t_optionalws -- * t_comma -- --- bibtexlexer._rules = { +-- bibtexlexer.rules = { -- { "whitespace", t_spacing }, -- { "assignment", t_assignment }, -- { "definition", t_definition }, @@ -177,19 +175,9 @@ bibtexlexer._rules = { -- { "rest", t_rest }, -- } -bibtexlexer._tokenstyles = context.styleset - -bibtexlexer._foldpattern = P("{") + P("}") - -bibtexlexer._foldsymbols = { - _patterns = { - "{", - "}", - }, - ["grouping"] = { - ["{"] = 1, - ["}"] = -1, - }, +bibtexlexer.folding = { + ["{"] = { ["grouping"] = 1 }, + ["}"] = { ["grouping"] = -1 }, } return bibtexlexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-bnf.lua b/context/data/scite/context/lexers/scite-context-lexer-bnf.lua index ce57642ba..9a5483850 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-bnf.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-bnf.lua @@ -8,18 +8,16 @@ local info = { -- will replace the one in metafun -local global, lpeg = _G, lpeg +local lpeg = lpeg local P, R, S = lpeg.P, lpeg.R, lpeg.S -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token -local exact_match = lexer.exact_match +local patterns = lexers.patterns +local token = lexers.token -local bnflexer = lexer.new("bnf","scite-context-lexer-bnf") -local whitespace = bnflexer.whitespace +local bnflexer = lexers.new("bnf","scite-context-lexer-bnf") +local bnfwhitespace = bnflexer.whitespace -- from wikipedia: -- @@ -58,7 +56,7 @@ local extra = P("|") local single = P("'") local double = P('"') -local t_spacing = token(whitespace,space^1) +local t_spacing = token(bnfwhitespace,space^1) local t_term = token("command",left) * token("text",name) * token("command",right) @@ -72,7 +70,7 @@ local t_becomes = token("operator",becomes) local t_extra = token("extra",extra) local t_rest = token("default",anything) -bnflexer._rules = { +bnflexer.rules = { { "whitespace", t_spacing }, { "term", t_term }, { "text", t_text }, @@ -81,19 +79,9 @@ bnflexer._rules = { { "rest", t_rest }, } -bnflexer._tokenstyles = context.styleset - -bnflexer._foldpattern = left + right - -bnflexer._foldsymbols = { - _patterns = { - "<", - ">", - }, - ["grouping"] = { - ["<"] = 1, - [">"] = -1, - }, +bnflexer.folding = { + ["<"] = { ["grouping"] = 1 }, + [">"] = { ["grouping"] = -1 }, } return bnflexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-cld.lua b/context/data/scite/context/lexers/scite-context-lexer-cld.lua index 7bda7800e..fe0fc9c43 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-cld.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-cld.lua @@ -6,18 +6,19 @@ local info = { license = "see context related readme files", } -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local cldlexer = lexer.new("cld","scite-context-lexer-cld") -local lualexer = lexer.load("scite-context-lexer-lua") +local patterns = lexers.patterns +local token = lexers.token + +local cldlexer = lexers.new("cld","scite-context-lexer-cld") +local lualexer = lexers.load("scite-context-lexer-lua") -- can probably be done nicer now, a 
bit of a hack -cldlexer._rules = lualexer._rules_cld -cldlexer._tokenstyles = lualexer._tokenstyles -cldlexer._foldsymbols = lualexer._foldsymbols -cldlexer._directives = lualexer._directives +cldlexer.rules = lualexer.rules_cld +cldlexer.embedded = lualexer.embedded +cldlexer.folding = lualexer.folding +cldlexer.directives = lualexer.directives return cldlexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-cpp-web.lua b/context/data/scite/context/lexers/scite-context-lexer-cpp-web.lua index 631a802fe..994634fe5 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-cpp-web.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-cpp-web.lua @@ -6,18 +6,22 @@ local info = { license = "see context related readme files", } -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local cppweblexer = lexer.new("cpp-web","scite-context-lexer-cpp") -local cpplexer = lexer.load("scite-context-lexer-cpp") +local patterns = lexers.patterns +local token = lexers.token + +local cppweblexer = lexers.new("cpp-web","scite-context-lexer-cpp") +local cpplexer = lexers.load("scite-context-lexer-cpp") -- can probably be done nicer now, a bit of a hack -cppweblexer._rules = cpplexer._rules_web -cppweblexer._tokenstyles = cpplexer._tokenstyles -cppweblexer._foldsymbols = cpplexer._foldsymbols -cppweblexer._directives = cpplexer._directives +-- setmetatable(cppweblexer, { __index = cpplexer }) + +cppweblexer.rules = cpplexer.rules_web +cppweblexer.embedded = cpplexer.embedded +-- cppweblexer.whitespace = cpplexer.whitespace +cppweblexer.folding = cpplexer.folding +cppweblexer.directives = cpplexer.directives return cppweblexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-cpp.lua b/context/data/scite/context/lexers/scite-context-lexer-cpp.lua index a50cdaa17..c77843c3b 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-cpp.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-cpp.lua @@ -10,15 +10,13 @@ local info = { local P, R, S = lpeg.P, lpeg.R, lpeg.S -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token -local exact_match = lexer.exact_match +local patterns = lexers.patterns +local token = lexers.token -local cpplexer = lexer.new("cpp","scite-context-lexer-cpp") -local whitespace = cpplexer.whitespace +local cpplexer = lexers.new("cpp","scite-context-lexer-cpp") +local cppwhitespace = cpplexer.whitespace local keywords = { -- copied from cpp.lua -- c @@ -56,6 +54,7 @@ local space = patterns.space -- S(" \n\r\t\f\v") local any = patterns.any local restofline = patterns.restofline local startofline = patterns.startofline +local exactmatch = patterns.exactmatch local squote = P("'") local dquote = P('"') @@ -71,7 +70,7 @@ local decimal = patterns.decimal local float = patterns.float local integer = P("-")^-1 * (hexadecimal + decimal) -- also in patterns ? 
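
The header rewrite applied above to the cld, cpp-web and cpp lexers is the same mechanical port that every file in this patch gets: require the module once as lexers, take patterns and token from it directly (the old lexer.context indirection is gone), create the lexer with lexers.new, and give the whitespace style a per-lexer local name. A minimal sketch of the resulting skeleton, assuming the lexers API exactly as used in these hunks; the "my" names are placeholders:

    local lexers = require("scite-context-lexer")

    local patterns = lexers.patterns
    local token    = lexers.token

    local mylexer      = lexers.new("my","scite-context-lexer-my") -- hypothetical name pair
    local mywhitespace = mylexer.whitespace -- unique per lexer, hence the local alias

    local t_spacing = token(mywhitespace,patterns.space^1)
    local t_rest    = token("default",patterns.any)

    mylexer.rules = {
        { "whitespace", t_spacing },
        { "rest",       t_rest    },
    }

    return mylexer
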
-local spacing = token(whitespace, space^1) +local spacing = token(cppwhitespace, space^1) local rest = token("default", any) local shortcomment = token("comment", slashes * restofline^0) @@ -93,10 +92,10 @@ local operator = token("special", S("+-*/%^!=<>;:{}[]().&|?~")) ----- optionalspace = spacing^0 -local p_keywords = exact_match(keywords) -local p_datatypes = exact_match(datatypes) -local p_macros = exact_match(macros) -local p_luatexs = exact_match(luatexs) +local p_keywords = exactmatch(keywords) +local p_datatypes = exactmatch(datatypes) +local p_macros = exactmatch(macros) +local p_luatexs = exactmatch(luatexs) local keyword = token("keyword", p_keywords) local datatype = token("keyword", p_datatypes) @@ -105,7 +104,7 @@ local luatex = token("command", p_luatexs) local macro = token("data", #P("#") * startofline * P("#") * S("\t ")^0 * p_macros) -cpplexer._rules = { +cpplexer.rules = { { "whitespace", spacing }, { "keyword", keyword }, { "type", datatype }, @@ -120,13 +119,13 @@ cpplexer._rules = { { "rest", rest }, } -local web = lexer.loadluafile("scite-context-lexer-web-snippets") +local web = lexers.loadluafile("scite-context-lexer-web-snippets") if web then - lexer.inform("supporting web snippets in cpp lexer") + -- lexers.report("supporting web snippets in cpp lexer") - cpplexer._rules_web = { + cpplexer.rules_web = { { "whitespace", spacing }, { "keyword", keyword }, { "type", datatype }, @@ -144,9 +143,9 @@ if web then else - lexer.report("not supporting web snippets in cpp lexer") + -- lexers.report("not supporting web snippets in cpp lexer") - cpplexer._rules_web = { + cpplexer.rules_web = { { "whitespace", spacing }, { "keyword", keyword }, { "type", datatype }, @@ -163,37 +162,17 @@ else end -cpplexer._tokenstyles = context.styleset - -cpplexer._foldpattern = P("/*") + P("*/") + S("{}") -- separate entry else interference (singular?) 
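
The fold setup changes shape the same way in every lexer below: the old pair of _foldpattern (a match pattern) and _foldsymbols (style-keyed tables plus a separate _patterns list) collapses into one folding table keyed by the matched text, with the token style and the fold delta nested inside. The shape, taken from the cpp entries that follow, plus a hypothetical lookup to illustrate the intended semantics (the consuming side is not part of this patch):

    -- shape: folding[matchedtext][tokenstyle] = delta, +1 opens a fold, -1 closes one

    local folding = {
        ["{"]  = { ["special"] = 1 },
        ["}"]  = { ["special"] = -1 },
        ["/*"] = { ["comment"] = 1 },
        ["*/"] = { ["comment"] = -1 },
    }

    -- assumed lookup on the folder side:
    local function folddelta(text,style)
        local entry = folding[text]
        return entry and entry[style] or 0 -- the style check avoids folding on "{" inside a string
    end
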
- -cpplexer._foldsymbols = { - _patterns = { - "[{}]", - "/%*", - "%*/", - }, - -- ["data"] = { -- macro - -- ["region"] = 1, - -- ["endregion"] = -1, - -- ["if"] = 1, - -- ["ifdef"] = 1, - -- ["ifndef"] = 1, - -- ["endif"] = -1, - -- }, - ["special"] = { -- operator - ["{"] = 1, - ["}"] = -1, - }, - ["comment"] = { - ["/*"] = 1, - ["*/"] = -1, - } +cpplexer.folding = { + -- ["region"] = { ["data"] = 1 }, + -- ["endregion"] = { ["data"] = -1 }, + -- ["if"] = { ["data"] = 1 }, + -- ["ifdef"] = { ["data"] = 1 }, + -- ["ifndef"] = { ["data"] = 1 }, + -- ["endif"] = { ["data"] = -1 }, + ["{"] = { ["special"] = 1 }, + ["}"] = { ["special"] = -1 }, + ["/*"] = { ["comment"] = 1 }, + ["*/"] = { ["comment"] = -1 }, } --- -- by indentation: - -cpplexer._foldpatterns = nil -cpplexer._foldsymbols = nil - return cpplexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-dummy.lua b/context/data/scite/context/lexers/scite-context-lexer-dummy.lua index 5d3096b7d..d54ffec22 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-dummy.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-dummy.lua @@ -10,26 +10,23 @@ local info = { -- we need to trigger that, for instance in the bibtex lexer, but still -- we get failed lexing -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token +local patterns = lexers.patterns +local token = lexers.token -local dummylexer = lexer.new("dummy","scite-context-lexer-dummy") -local whitespace = dummylexer.whitespace +local dummylexer = lexers.new("dummy","scite-context-lexer-dummy") +local dummywhitespace = dummylexer.whitespace -local space = patterns.space -local nospace = (1-space) +local space = patterns.space +local nospace = (1-space) -local t_spacing = token(whitespace, space ^1) -local t_rest = token("default", nospace^1) +local t_spacing = token(dummywhitespace, space^1) +local t_rest = token("default", nospace^1) -dummylexer._rules = { +dummylexer.rules = { { "whitespace", t_spacing }, { "rest", t_rest }, } -dummylexer._tokenstyles = context.styleset - return dummylexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-json.lua b/context/data/scite/context/lexers/scite-context-lexer-json.lua index ca7add07d..c648b132a 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-json.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-json.lua @@ -6,19 +6,16 @@ local info = { license = "see context related readme files", } -local global, string, table, lpeg = _G, string, table, lpeg -local P, R, S, V = lpeg.P, lpeg.R, lpeg.S, lpeg.V -local type = type +local lpeg = lpeg +local P, R, S = lpeg.P, lpeg.R, lpeg.S -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token -local exact_match = lexer.exact_match +local patterns = lexers.patterns +local token = lexers.token -local jsonlexer = lexer.new("json","scite-context-lexer-json") -local whitespace = jsonlexer.whitespace +local jsonlexer = lexers.new("json","scite-context-lexer-json") +local jsonwhitespace = jsonlexer.whitespace local anything = patterns.anything local comma = P(",") @@ -48,31 +45,31 @@ local reserved = P("true") local integer = P("-")^-1 * (patterns.hexadecimal + patterns.decimal) local float = patterns.float -local t_number = token("number", float + integer) - * 
(token("error",R("AZ","az","__")^1))^0 +local t_number = token("number", float + integer) + * (token("error", R("AZ","az","__")^1))^0 -local t_spacing = token(whitespace, space^1) -local t_optionalws = token("default", space^1)^0 +local t_spacing = token("whitespace", space^1) +local t_optionalws = token("default", space^1)^0 -local t_operator = token("special", operator) +local t_operator = token("special", operator) -local t_string = token("operator",double) - * token("string",content) - * token("operator",double) +local t_string = token("operator", double) + * token("string", content) + * token("operator", double) -local t_key = token("operator",double) - * token("text",content) - * token("operator",double) +local t_key = token("operator", double) + * token("text", content) + * token("operator", double) * t_optionalws - * token("operator",colon) + * token("operator", colon) -local t_fences = token("operator",fence) -- grouping +local t_fences = token("operator", fence) -- grouping -local t_reserved = token("primitive",reserved) +local t_reserved = token("primitive", reserved) -local t_rest = token("default",anything) +local t_rest = token("default", anything) -jsonlexer._rules = { +jsonlexer.rules = { { "whitespace", t_spacing }, { "reserved", t_reserved }, { "key", t_key }, @@ -83,19 +80,11 @@ jsonlexer._rules = { { "rest", t_rest }, } -jsonlexer._tokenstyles = context.styleset - -jsonlexer._foldpattern = fence - -jsonlexer._foldsymbols = { - _patterns = { - "{", "}", - "[", "]", - }, - ["grouping"] = { - ["{"] = 1, ["}"] = -1, - ["["] = 1, ["]"] = -1, - }, +jsonlexer.folding = { + ["{"] = { ["grouping"] = 1 }, + ["}"] = { ["grouping"] = -1 }, + ["["] = { ["grouping"] = 1 }, + ["]"] = { ["grouping"] = -1 }, } return jsonlexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-lua-longstring.lua b/context/data/scite/context/lexers/scite-context-lexer-lua-longstring.lua index b1304f65c..5e7fa1256 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-lua-longstring.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-lua-longstring.lua @@ -6,26 +6,25 @@ local info = { license = "see context related readme files", } -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +-- This one is needed because we have spaces in strings and partial lexing depends +-- on them being different. 
-local token = lexer.token +local lexers = require("scite-context-lexer") -local stringlexer = lexer.new("lua-longstring","scite-context-lexer-lua-longstring") -local whitespace = stringlexer.whitespace +local patterns = lexers.patterns +local token = lexers.token + +local stringlexer = lexers.new("lua-longstring","scite-context-lexer-lua-longstring") local space = patterns.space local nospace = 1 - space -local p_spaces = token(whitespace, space ^1) -local p_string = token("string", nospace^1) +local p_spaces = token("whitespace", space^1) +local p_string = token("string", nospace^1) -stringlexer._rules = { +stringlexer.rules = { { "whitespace", p_spaces }, { "string", p_string }, } -stringlexer._tokenstyles = context.styleset - return stringlexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-lua.lua b/context/data/scite/context/lexers/scite-context-lexer-lua.lua index 0e54d56ba..41219b66e 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-lua.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-lua.lua @@ -6,33 +6,23 @@ local info = { license = "see context related readme files", } --- beware: all multiline is messy, so even if it's no lexer, it should be an embedded lexer --- we probably could use a local whitespace variant but this is cleaner - local P, R, S, C, Cmt, Cp = lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.Cmt, lpeg.Cp local match, find = string.match, string.find local setmetatable = setmetatable -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns - -local token = lexer.token -local exact_match = lexer.exact_match -local just_match = lexer.just_match +local lexers = require("scite-context-lexer") -local lualexer = lexer.new("lua","scite-context-lexer-lua") -local whitespace = lualexer.whitespace +local patterns = lexers.patterns +local token = lexers.token -local stringlexer = lexer.load("scite-context-lexer-lua-longstring") ------ labellexer = lexer.load("scite-context-lexer-lua-labelstring") +local lualexer = lexers.new("lua","scite-context-lexer-lua") -local directives = { } -- communication channel +local luawhitespace = lualexer.whitespace --- this will be extended +local stringlexer = lexers.load("scite-context-lexer-lua-longstring") +----- labellexer = lexers.load("scite-context-lexer-lua-labelstring") --- we could combine some in a hash that returns the class that then makes the token --- this can save time on large files +local directives = { } -- communication channel local keywords = { "and", "break", "do", "else", "elseif", "end", "false", "for", "function", -- "goto", @@ -61,12 +51,6 @@ local constants = { "", "", } --- local tokenmappings = { } --- --- for i=1,#keywords do tokenmappings[keywords [i]] = "keyword" } --- for i=1,#functions do tokenmappings[functions[i]] = "function" } --- for i=1,#constants do tokenmappings[constants[i]] = "constant" } - local internals = { -- __ "add", "call", "concat", "div", "idiv", "eq", "gc", "index", "le", "lt", "metatable", "mode", "mul", "newindex", @@ -113,16 +97,16 @@ local longtwostring = P(function(input,index) end end) - local longtwostring_body = longtwostring +local longtwostring_body = longtwostring - local longtwostring_end = P(function(input,index) - if level then - -- local sentinel = "]" .. level .. 
"]" - local sentinel = sentinels[level] - local _, stop = find(input,sentinel,index,true) - return stop and stop + 1 or #input + 1 - end - end) +local longtwostring_end = P(function(input,index) + if level then + -- local sentinel = "]" .. level .. "]" + local sentinel = sentinels[level] + local _, stop = find(input,sentinel,index,true) + return stop and stop + 1 or #input + 1 + end +end) local longcomment = Cmt(#("[[" + ("[" * C(equals) * "[")), function(input,index,level) -- local sentinel = "]" .. level .. "]" @@ -134,14 +118,16 @@ end) local space = patterns.space -- S(" \n\r\t\f\v") local any = patterns.any local eol = patterns.eol +local exactmatch = patterns.exactmatch +local justmatch = patterns.justmatch local squote = P("'") local dquote = P('"') local escaped = P("\\") * P(1) local dashes = P("--") -local spacing = token(whitespace, space^1) -local rest = token("default", any) +local spacing = token(luawhitespace, space^1) +local rest = token("default", any) local shortcomment = token("comment", dashes * (1-eol)^0) local longcomment = token("comment", dashes * longcomment) @@ -166,7 +152,7 @@ local shortstring = token("quote", dquote) local string = shortstring ----- + longstring -lexer.embed_lexer(lualexer, stringlexer, token("quote",longtwostart), token("string",longtwostring_body) * token("quote",longtwostring_end)) +lexers.embed(lualexer, stringlexer, token("quote",longtwostart), token("string",longtwostring_body) * token("quote",longtwostring_end)) local integer = P("-")^-1 * (patterns.hexadecimal + patterns.decimal) local number = token("number", patterns.float + integer) @@ -187,8 +173,6 @@ local identifier = token("default",validword) ----- operator = token("special", S('+-*/%^#=<>;:,{}[]().') + P('~=') ) -- no ^1 because of nested lexers local operator = token("special", S('+-*/%^#=<>;:,{}[]().|~')) -- no ^1 because of nested lexers -local structure = token("special", S('{}[]()')) - local optionalspace = spacing^0 local hasargument = #S("{([") @@ -203,20 +187,15 @@ local gotolabel = token("keyword", P("::")) * (spacing + shortcomment)^0 * token("keyword", P("::")) ------ p_keywords = exact_match(keywords) ------ p_functions = exact_match(functions) ------ p_constants = exact_match(constants) ------ p_internals = P("__") ------ * exact_match(internals) +local p_keywords = exactmatch(keywords) +local p_functions = exactmatch(functions) +local p_constants = exactmatch(constants) +local p_internals = P("__") + * exactmatch(internals) local p_finish = #(1-R("az","AZ","__")) -local p_keywords = lexer.helpers.utfchartabletopattern(keywords) * p_finish -- exact_match(keywords) -local p_functions = lexer.helpers.utfchartabletopattern(functions) * p_finish -- exact_match(functions) -local p_constants = lexer.helpers.utfchartabletopattern(constants) * p_finish -- exact_match(constants) -local p_internals = P("__") - * lexer.helpers.utfchartabletopattern(internals) * p_finish -- exact_match(internals) -local p_csnames = lexer.helpers.utfchartabletopattern(csnames) -- * p_finish -- just_match(csnames) +local p_csnames = justmatch(csnames) local p_ctnames = P("ctx") * R("AZ","az","__")^0 local keyword = token("keyword", p_keywords) local builtin = token("plain", p_functions) @@ -237,23 +216,11 @@ local identifier = token("default", validword) token("default", validword ) ) )^0 --- local t = { } for k, v in next, tokenmappings do t[#t+1] = k end t = table.concat(t) --- -- local experimental = (S(t)^1) / function(s) return tokenmappings[s] end * Cp() --- --- local experimental = 
Cmt(S(t)^1, function(_,i,s) --- local t = tokenmappings[s] --- if t then --- return true, t, i --- end --- end) - -lualexer._rules = { +lualexer.rules = { { "whitespace", spacing }, { "keyword", keyword }, -- can be combined - -- { "structure", structure }, { "function", builtin }, -- can be combined { "constant", constant }, -- can be combined - -- { "experimental", experimental }, -- works but better split { "csname", csname }, { "goto", gotokeyword }, { "identifier", identifier }, @@ -266,82 +233,29 @@ lualexer._rules = { { "rest", rest }, } --- -- experiment --- --- local idtoken = R("az","AZ","__") --- --- function context.one_of_match(specification) --- local pattern = idtoken -- the concat catches _ etc --- local list = { } --- for i=1,#specification do --- local style = specification[i][1] --- local words = specification[i][2] --- pattern = pattern + S(table.concat(words)) --- for i=1,#words do --- list[words[i]] = style --- end --- end --- return Cmt(pattern^1, function(_,i,s) --- local style = list[s] --- if style then --- return true, { style, i } -- and i or nil --- else --- -- fail --- end --- end) --- end --- --- local whatever = context.one_of_match { --- { "keyword", keywords }, -- keyword --- { "plain", functions }, -- builtin --- { "data", constants }, -- constant --- } --- --- lualexer._rules = { --- { "whitespace", spacing }, --- { "whatever", whatever }, --- { "csname", csname }, --- { "goto", gotokeyword }, --- { "identifier", identifier }, --- { "string", string }, --- { "number", number }, --- { "longcomment", longcomment }, --- { "shortcomment", shortcomment }, --- { "label", gotolabel }, --- { "operator", operator }, --- { "rest", rest }, --- } - -lualexer._tokenstyles = context.styleset - --- lualexer._foldpattern = R("az")^2 + S("{}[]") -- separate entry else interference - -lualexer._foldpattern = (P("end") + P("if") + P("do") + P("function") + P("repeat") + P("until")) * P(#(1 - R("az"))) - + S("{}[]") - -lualexer._foldsymbols = { - _patterns = { - "[a-z][a-z]+", - "[{}%[%]]", +lualexer.folding = { + -- challenge: if=0 then=1 else=-1 elseif=-1 + ["if"] = { ["keyword"] = 1 }, -- if .. [then|else] .. end + ["do"] = { ["keyword"] = 1 }, -- [while] do .. end + ["function"] = { ["keyword"] = 1 }, -- function .. end + ["repeat"] = { ["keyword"] = 1 }, -- repeat .. until + ["until"] = { ["keyword"] = -1 }, + ["end"] = { ["keyword"] = -1 }, + -- ["else"] = { ["keyword"] = 1 }, + -- ["elseif"] = { ["keyword"] = 1 }, -- already catched by if + -- ["elseif"] = { ["keyword"] = 0 }, + ["["] = { + ["comment"] = 1, + -- ["quote"] = 1, -- confusing }, - ["keyword"] = { -- challenge: if=0 then=1 else=-1 elseif=-1 - ["if"] = 1, -- if .. [then|else] .. end - ["do"] = 1, -- [while] do .. end - ["function"] = 1, -- function .. end - ["repeat"] = 1, -- repeat .. 
until - ["until"] = -1, - ["end"] = -1, - }, - ["comment"] = { - ["["] = 1, ["]"] = -1, - }, - -- ["quote"] = { -- confusing - -- ["["] = 1, ["]"] = -1, - -- }, - ["special"] = { - -- ["("] = 1, [")"] = -1, - ["{"] = 1, ["}"] = -1, + ["]"] = { + ["comment"] = -1 + -- ["quote"] = -1, -- confusing }, + -- ["("] = { ["special"] = 1 }, + -- [")"] = { ["special"] = -1 }, + ["{"] = { ["special"] = 1 }, + ["}"] = { ["special"] = -1 }, } -- embedded in tex: @@ -360,24 +274,15 @@ local texstring = token("quote", longthreestart) * token("string", longthreestring) * token("quote", longthreestop) ------ texcommand = token("user", texcsname) local texcommand = token("warning", texcsname) --- local texstring = token("quote", longthreestart) --- * (texcommand + token("string",P(1-texcommand-longthreestop)^1) - longthreestop)^0 -- we match long non-\cs sequences --- * token("quote", longthreestop) - --- local whitespace = "whitespace" --- local spacing = token(whitespace, space^1) - -lualexer._directives = directives +lualexer.directives = directives -lualexer._rules_cld = { +lualexer.rules_cld = { { "whitespace", spacing }, { "texstring", texstring }, { "texcomment", texcomment }, { "texcommand", texcommand }, - -- { "structure", structure }, { "keyword", keyword }, { "function", builtin }, { "csname", csname }, diff --git a/context/data/scite/context/lexers/scite-context-lexer-mps.lua b/context/data/scite/context/lexers/scite-context-lexer-mps.lua index 356bf1f6b..ddf62ecb0 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-mps.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-mps.lua @@ -10,15 +10,13 @@ local global, string, table, lpeg = _G, string, table, lpeg local P, R, S, V = lpeg.P, lpeg.R, lpeg.S, lpeg.V local type = type -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token -local exact_match = lexer.exact_match +local patterns = lexers.patterns +local token = lexers.token -local metafunlexer = lexer.new("mps","scite-context-lexer-mps") -local whitespace = metafunlexer.whitespace +local metafunlexer = lexers.new("mps","scite-context-lexer-mps") +local metafunwhitespace = metafunlexer.whitespace local metapostprimitives = { } local metapostinternals = { } @@ -34,7 +32,7 @@ local mergedinternals = { } do - local definitions = context.loaddefinitions("scite-context-data-metapost") + local definitions = lexers.loaddefinitions("scite-context-data-metapost") if definitions then metapostprimitives = definitions.primitives or { } @@ -43,7 +41,7 @@ do metapostcommands = definitions.commands or { } end - local definitions = context.loaddefinitions("scite-context-data-metafun") + local definitions = lexers.loaddefinitions("scite-context-data-metafun") if definitions then metafuninternals = definitions.internals or { } @@ -69,6 +67,7 @@ end local space = patterns.space -- S(" \n\r\t\f\v") local any = patterns.any +local exactmatch = patterns.exactmatch local dquote = P('"') local cstoken = patterns.idtoken @@ -81,43 +80,44 @@ local cstokentex = R("az","AZ","\127\255") + S("@!?_") -- we could collapse as in tex -local spacing = token(whitespace, space^1) -local rest = token("default", any) -local comment = token("comment", P("%") * (1-S("\n\r"))^0) -local internal = token("reserved", exact_match(mergedshortcuts,false)) -local shortcut = token("data", exact_match(mergedinternals)) - -local helper = token("command", exact_match(metafuncommands)) 
-local plain = token("plain", exact_match(metapostcommands)) -local quoted = token("quote", dquote) - * token("string", P(1-dquote)^0) - * token("quote", dquote) +local spacing = token(metafunwhitespace, space^1) + +local rest = token("default", any) +local comment = token("comment", P("%") * (1-S("\n\r"))^0) +local internal = token("reserved", exactmatch(mergedshortcuts)) +local shortcut = token("data", exactmatch(mergedinternals)) + +local helper = token("command", exactmatch(metafuncommands)) +local plain = token("plain", exactmatch(metapostcommands)) +local quoted = token("quote", dquote) + * token("string", P(1-dquote)^0) + * token("quote", dquote) local separator = P(" ") + S("\n\r")^1 local btex = (P("btex") + P("verbatimtex")) * separator local etex = separator * P("etex") -local texstuff = token("quote", btex) - * token("string", (1-etex)^0) - * token("quote", etex) -local primitive = token("primitive", exact_match(metapostprimitives)) -local identifier = token("default", cstoken^1) -local number = token("number", number) -local grouping = token("grouping", S("()[]{}")) -- can be an option -local suffix = token("number", P("#@") + P("@#") + P("#")) -local special = token("special", P("#@") + P("@#") + S("#()[]{}<>=:\"")) -- or else := <> etc split -local texlike = token("warning", P("\\") * cstokentex^1) -local extra = token("extra", P("+-+") + P("++") + S("`~%^&_-+*/\'|\\")) +local texstuff = token("quote", btex) + * token("string", (1-etex)^0) + * token("quote", etex) +local primitive = token("primitive", exactmatch(metapostprimitives)) +local identifier = token("default", cstoken^1) +local number = token("number", number) +local grouping = token("grouping", S("()[]{}")) -- can be an option +local suffix = token("number", P("#@") + P("@#") + P("#")) +local special = token("special", P("#@") + P("@#") + S("#()[]{}<>=:\"")) -- or else := <> etc split +local texlike = token("warning", P("\\") * cstokentex^1) +local extra = token("extra", P("+-+") + P("++") + S("`~%^&_-+*/\'|\\")) local nested = P { leftbrace * (V(1) + (1-rightbrace))^0 * rightbrace } -local texlike = token("embedded", P("\\") * (P("MP") + P("mp")) * mptoken^1) +local texlike = token("embedded", P("\\") * (P("MP") + P("mp")) * mptoken^1) * spacing^0 - * token("grouping", leftbrace) - * token("default", (nested + (1-rightbrace))^0 ) - * token("grouping", rightbrace) - + token("warning", P("\\") * cstokentex^1) + * token("grouping", leftbrace) + * token("default", (nested + (1-rightbrace))^0 ) + * token("grouping", rightbrace) + + token("warning", P("\\") * cstokentex^1) -- lua: we assume: lua ( "lua code" ) -local cldlexer = lexer.load("scite-context-lexer-cld","mps-cld") +local cldlexer = lexers.load("scite-context-lexer-cld","mps-cld") local startlua = P("lua") * space^0 * P('(') * space^0 * P('"') local stoplua = P('"') * space^0 * P(')') @@ -125,13 +125,13 @@ local stoplua = P('"') * space^0 * P(')') local startluacode = token("embedded", startlua) local stopluacode = #stoplua * token("embedded", stoplua) -lexer.embed_lexer(metafunlexer, cldlexer, startluacode, stopluacode) +lexers.embed(metafunlexer, cldlexer, startluacode, stopluacode) local luacall = token("embedded",P("lua") * ( P(".") * R("az","AZ","__")^1 )^1) local keyword = token("default", (R("AZ","az","__")^1) * # P(space^0 * P("="))) -metafunlexer._rules = { +metafunlexer.rules = { { "whitespace", spacing }, { "comment", comment }, { "keyword", keyword }, -- experiment, maybe to simple @@ -153,37 +153,24 @@ metafunlexer._rules = { { "rest", rest }, } 
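
A rules table like the one just assembled for metafun is an ordered list of { name, pattern } pairs: candidates are tried top to bottom at each position, so whitespace sits first and the catch-all rest sits last. The order is significant; sketched with the placeholder tokens from the skeleton earlier:

    mylexer.rules = {
        { "whitespace", t_spacing    }, -- first: whitespace tokens drive partial relexing
        { "keyword",    t_keyword    }, -- must precede identifier, or the identifier
        { "identifier", t_identifier }, -- pattern would swallow every keyword
        { "rest",       t_rest       }, -- last: consumes any leftover byte
    }
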
-metafunlexer._tokenstyles = context.styleset - -metafunlexer._foldpattern = patterns.lower^2 -- separate entry else interference - -metafunlexer._foldsymbols = { - _patterns = { - "[a-z][a-z]+", - }, - ["plain"] = { - ["beginfig"] = 1, - ["endfig"] = -1, - ["beginglyph"] = 1, - ["endglyph"] = -1, - -- ["begingraph"] = 1, - -- ["endgraph"] = -1, - }, - ["primitive"] = { - ["def"] = 1, - ["vardef"] = 1, - ["primarydef"] = 1, - ["secondarydef" ] = 1, - ["tertiarydef"] = 1, - ["enddef"] = -1, - ["if"] = 1, - ["fi"] = -1, - ["for"] = 1, - ["forever"] = 1, - ["endfor"] = -1, - } +metafunlexer.folding = { + ["beginfig"] = { ["plain"] = 1 }, + ["endfig"] = { ["plain"] = -1 }, + ["beginglyph"] = { ["plain"] = 1 }, + ["endglyph"] = { ["plain"] = -1 }, + -- ["begingraph"] = { ["plain"] = 1 }, + -- ["endgraph"] = { ["plain"] = -1 }, + ["def"] = { ["primitive"] = 1 }, + ["vardef"] = { ["primitive"] = 1 }, + ["primarydef"] = { ["primitive"] = 1 }, + ["secondarydef" ] = { ["primitive"] = 1 }, + ["tertiarydef"] = { ["primitive"] = 1 }, + ["enddef"] = { ["primitive"] = -1 }, + ["if"] = { ["primitive"] = 1 }, + ["fi"] = { ["primitive"] = -1 }, + ["for"] = { ["primitive"] = 1 }, + ["forever"] = { ["primitive"] = 1 }, + ["endfor"] = { ["primitive"] = -1 }, } --- if inspect then inspect(metafunlexer) end - return metafunlexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-pdf.lua b/context/data/scite/context/lexers/scite-context-lexer-pdf.lua index 1956071b7..2d691143d 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-pdf.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-pdf.lua @@ -11,17 +11,16 @@ local info = { local P, R, S, V = lpeg.P, lpeg.R, lpeg.S, lpeg.V -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token +local patterns = lexers.patterns +local token = lexers.token -local pdflexer = lexer.new("pdf","scite-context-lexer-pdf") -local whitespace = pdflexer.whitespace +local pdflexer = lexers.new("pdf","scite-context-lexer-pdf") +local pdfwhitespace = pdflexer.whitespace ------ pdfobjectlexer = lexer.load("scite-context-lexer-pdf-object") ------ pdfxreflexer = lexer.load("scite-context-lexer-pdf-xref") +----- pdfobjectlexer = lexers.load("scite-context-lexer-pdf-object") +----- pdfxreflexer = lexers.load("scite-context-lexer-pdf-xref") local anything = patterns.anything local space = patterns.space @@ -30,10 +29,10 @@ local nospacing = patterns.nospacing local anything = patterns.anything local restofline = patterns.restofline -local t_whitespace = token(whitespace, spacing) -local t_spacing = token("default", spacing) ------ t_rest = token("default", nospacing) -local t_rest = token("default", anything) +local t_whitespace = token(pdfwhitespace, spacing) +local t_spacing = token("default", spacing) +----- t_rest = token("default", nospacing) +local t_rest = token("default", anything) local p_comment = P("%") * restofline local t_comment = token("comment", p_comment) @@ -187,7 +186,7 @@ local t_number = token("number", cardinal) -- t_number = token("number", cardinal * spacing * cardinal * spacing) -- * token("keyword", S("fn")) -pdflexer._rules = { +pdflexer.rules = { { "whitespace", t_whitespace }, { "object", t_object }, { "comment", t_comment }, @@ -198,21 +197,15 @@ pdflexer._rules = { { "rest", t_rest }, } -pdflexer._tokenstyles = context.styleset - -- lexer.inspect(pdflexer) -- collapser: obj endobj stream endstream 
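
The collapser pairs the object and stream keywords, giving two fold levels per object. A worked trace over a typical body, using the deltas from the folding table below:

    -- 1 0 obj       keyword "obj"        +1  -> level 1
    --   << ... >>                             level 1
    --   stream      keyword "stream"     +1  -> level 2
    --   ...                                   level 2
    --   endstream   keyword "endstream"  -1  -> level 1
    -- endobj        keyword "endobj"     -1  -> level 0
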
-pdflexer._foldpattern = p_obj + p_endobj + p_stream + p_endstream - -pdflexer._foldsymbols = { - ["keyword"] = { - ["obj"] = 1, - ["endobj"] = -1, - ["stream"] = 1, - ["endstream"] = -1, - }, +pdflexer.folding = { + ["obj"] = { ["keyword"] = 1 }, + ["endobj"] = { ["keyword"] = -1 }, + ["stream"] = { ["keyword"] = 1 }, + ["endstream"] = { ["keyword"] = -1 }, } return pdflexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-sas.lua b/context/data/scite/context/lexers/scite-context-lexer-sas.lua index e36569911..051918bbf 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-sas.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-sas.lua @@ -10,28 +10,27 @@ local info = { local P, R, S = lpeg.P, lpeg.R, lpeg.S -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token -local exact_match = lexer.exact_match +local patterns = lexers.patterns +local token = lexers.token -local saslexer = lexer.new("sas","scite-context-lexer-sAs") -local whitespace = saslexer.whitespace +local saslexer = lexers.new("sas","scite-context-lexer-sAs") +local saswhitespace = saslexer.whitespace local keywords_standard = { - "anova" , "data", "run", "proc", + "anova", "data", "run", "proc", } local keywords_dialects = { - "class" , "do", "end" , "int" , "for" , "model" , "rannor" , "to" , "output" + "class", "do", "end", "int", "for", "model", "rannor", "to", "output", } local space = patterns.space -- S(" \n\r\t\f\v") local any = patterns.any local restofline = patterns.restofline local startofline = patterns.startofline +local exactmatch = patterns.exactmatch local squote = P("'") local dquote = P('"') @@ -45,48 +44,42 @@ local decimal = patterns.decimal local float = patterns.float local integer = P("-")^-1 * decimal -local spacing = token(whitespace, space^1) +local spacing = token(saswhitespace, space^1) + local rest = token("default", any) local shortcomment = token("comment", (P("#") + P("--")) * restofline^0) local longcomment = token("comment", begincomment * (1-endcomment)^0 * endcomment^-1) -local identifier = token("default",lexer.helpers.utfidentifier) +local identifier = token("default", lexer.helpers.utfidentifier) -local shortstring = token("quote", dquote) -- can be shared +local shortstring = token("quote", dquote) -- can be shared * token("string", (escaped + (1-dquote))^0) - * token("quote", dquote) - + token("quote", squote) + * token("quote", dquote) + + token("quote", squote) * token("string", (escaped + (1-squote))^0) - * token("quote", squote) - + token("quote", bquote) + * token("quote", squote) + + token("quote", bquote) * token("string", (escaped + (1-bquote))^0) - * token("quote", bquote) + * token("quote", bquote) + +local p_keywords_s = exactmatch(keywords_standard,true) +local p_keywords_d = exactmatch(keywords_dialects,true) -local p_keywords_s = exact_match(keywords_standard,nil,true) -local p_keywords_d = exact_match(keywords_dialects,nil,true) local keyword_s = token("keyword", p_keywords_s) local keyword_d = token("command", p_keywords_d) local number = token("number", float + integer) local operator = token("special", S("+-*/%^!=<>;:{}[]().&|?~")) -saslexer._tokenstyles = context.styleset - -saslexer._foldpattern = P("/*") + P("*/") + S("{}") -- separate entry else interference - -saslexer._foldsymbols = { - _patterns = { - "/%*", - "%*/", - }, - ["comment"] = { - ["/*"] = 1, - ["*/"] = -1, - } +saslexer.folding 
= { + ["/*"] = { ["comment"] = 1 }, + ["*/"] = { ["comment"] = -1 }, + -- ["{"] = { ["operator"] = 1 }, + -- ["}"] = { ["operator"] = -1 }, } -saslexer._rules = { +saslexer.rules = { { "whitespace", spacing }, { "keyword-s", keyword_s }, { "keyword-d", keyword_d }, diff --git a/context/data/scite/context/lexers/scite-context-lexer-sql.lua b/context/data/scite/context/lexers/scite-context-lexer-sql.lua index cf0a03331..97a2ea07b 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-sql.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-sql.lua @@ -8,15 +8,13 @@ local info = { local P, R, S = lpeg.P, lpeg.R, lpeg.S -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token -local exact_match = lexer.exact_match +local patterns = lexers.patterns +local token = lexers.token -local sqllexer = lexer.new("sql","scite-context-lexer-sql") -local whitespace = sqllexer.whitespace +local sqllexer = lexers.new("sql","scite-context-lexer-sql") +local sqlwhitespace = sqllexer.whitespace -- ANSI SQL 92 | 99 | 2003 @@ -167,6 +165,7 @@ local space = patterns.space -- S(" \n\r\t\f\v") local any = patterns.any local restofline = patterns.restofline local startofline = patterns.startofline +local exactmatch = patterns.exactmatch local squote = P("'") local dquote = P('"') @@ -180,7 +179,7 @@ local decimal = patterns.decimal local float = patterns.float local integer = P("-")^-1 * decimal -local spacing = token(whitespace, space^1) +local spacing = token(sqlwhitespace, space^1) local rest = token("default", any) local shortcomment = token("comment", (P("#") + P("--")) * restofline^0) @@ -189,40 +188,32 @@ local longcomment = token("comment", begincomment * (1-endcomment)^0 * endcomm local p_validword = R("AZ","az","__") * R("AZ","az","__","09")^0 local identifier = token("default",p_validword) -local shortstring = token("quote", dquote) -- can be shared +local shortstring = token("quote", dquote) -- can be shared * token("string", (escaped + (1-dquote))^0) - * token("quote", dquote) - + token("quote", squote) + * token("quote", dquote) + + token("quote", squote) * token("string", (escaped + (1-squote))^0) - * token("quote", squote) - + token("quote", bquote) + * token("quote", squote) + + token("quote", bquote) * token("string", (escaped + (1-bquote))^0) - * token("quote", bquote) + * token("quote", bquote) -local p_keywords_s = exact_match(keywords_standard,nil,true) -local p_keywords_d = exact_match(keywords_dialects,nil,true) +local p_keywords_s = exactmatch(keywords_standard,true) +local p_keywords_d = exactmatch(keywords_dialects,true) local keyword_s = token("keyword", p_keywords_s) local keyword_d = token("command", p_keywords_d) local number = token("number", float + integer) local operator = token("special", S("+-*/%^!=<>;:{}[]().&|?~")) -sqllexer._tokenstyles = context.styleset - -sqllexer._foldpattern = P("/*") + P("*/") + S("{}") -- separate entry else interference - -sqllexer._foldsymbols = { - _patterns = { - "/%*", - "%*/", - }, - ["comment"] = { - ["/*"] = 1, - ["*/"] = -1, - } +sqllexer.folding = { + ["/*"] = { ["comment"] = 1 }, + ["*/"] = { ["comment"] = -1 }, + -- ["{"] = { ["operator"] = 1 }, + -- ["}"] = { ["operator"] = -1 }, } -sqllexer._rules = { +sqllexer.rules = { { "whitespace", spacing }, { "keyword-s", keyword_s }, { "keyword-d", keyword_d }, diff --git a/context/data/scite/context/lexers/scite-context-lexer-tex-web.lua 
b/context/data/scite/context/lexers/scite-context-lexer-tex-web.lua index 88499a9c2..8f0e5daa8 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-tex-web.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-tex-web.lua @@ -6,18 +6,17 @@ local info = { license = "see context related readme files", } -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local texweblexer = lexer.new("tex-web","scite-context-lexer-tex") -local texlexer = lexer.load("scite-context-lexer-tex") +local texweblexer = lexers.new("tex-web","scite-context-lexer-tex") +local texlexer = lexers.load("scite-context-lexer-tex") -- can probably be done nicer now, a bit of a hack -texweblexer._rules = texlexer._rules_web -texweblexer._tokenstyles = texlexer._tokenstyles -texweblexer._foldsymbols = texlexer._foldsymbols -texweblexer._directives = texlexer._directives +texweblexer.rules = texlexer.rules_web +texweblexer.embedded = texlexer.embedded +-- texweblexer.whitespace = texlexer.whitespace +texweblexer.folding = texlexer.folding +texweblexer.directives = texlexer.directives return texweblexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-tex.lua b/context/data/scite/context/lexers/scite-context-lexer-tex.lua index 71cfce0f5..a4aa83aa0 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-tex.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-tex.lua @@ -6,55 +6,34 @@ local info = { license = "see context related readme files", } --- maybe: _LINEBYLINE variant for large files (no nesting) --- maybe: protected_macros - ---[[ - - experiment dd 2009/10/28 .. todo: - - -- figure out if tabs instead of splits are possible - -- locate an option to enter name in file dialogue (like windows permits) - -- figure out why loading a file fails - -- we cannot print to the log pane - -- we cannot access props["keywordclass.macros.context.en"] - -- lexer.get_property only handles integers - -- we cannot run a command to get the location of mult-def.lua - - -- local interface = props["keywordclass.macros.context.en"] - -- local interface = lexer.get_property("keywordclass.macros.context.en","") - -]]-- - -local global, string, table, lpeg = _G, string, table, lpeg +local string, table, lpeg = string, table, lpeg local P, R, S, V, C, Cmt, Cp, Cc, Ct = lpeg.P, lpeg.R, lpeg.S, lpeg.V, lpeg.C, lpeg.Cmt, lpeg.Cp, lpeg.Cc, lpeg.Ct local type, next = type, next local find, match, lower, upper = string.find, string.match, string.lower, string.upper -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns -local inform = context.inform +local lexers = require("scite-context-lexer") -local token = lexer.token -local exact_match = lexer.exact_match +local patterns = lexers.patterns +local token = lexers.token +local report = lexers.report -local contextlexer = lexer.new("tex","scite-context-lexer-tex") -local whitespace = contextlexer.whitespace +local contextlexer = lexers.new("tex","scite-context-lexer-tex") +local texwhitespace = contextlexer.whitespace -local cldlexer = lexer.load("scite-context-lexer-cld") -local mpslexer = lexer.load("scite-context-lexer-mps") +local cldlexer = lexers.load("scite-context-lexer-cld") +-- local cldlexer = lexers.load("scite-context-lexer-lua") +local mpslexer = lexers.load("scite-context-lexer-mps") -local commands = { en = { } } -local primitives = { } -local helpers = { } 
-local constants = { } +local commands = { en = { } } +local primitives = { } +local helpers = { } +local constants = { } do -- todo: only once, store in global -- commands helpers primitives - local definitions = context.loaddefinitions("scite-context-data-interfaces") + local definitions = lexers.loaddefinitions("scite-context-data-interfaces") if definitions then local used = { } @@ -86,10 +65,10 @@ do -- todo: only once, store in global end end table.sort(used) - inform("context user interfaces '%s' supported",table.concat(used," ")) + report("context user interfaces '%s' supported",table.concat(used," ")) end - local definitions = context.loaddefinitions("scite-context-data-context") + local definitions = lexers.loaddefinitions("scite-context-data-context") local overloaded = { } if definitions then @@ -103,7 +82,7 @@ do -- todo: only once, store in global end end - local definitions = context.loaddefinitions("scite-context-data-tex") + local definitions = lexers.loaddefinitions("scite-context-data-tex") if definitions then local function add(data,normal) @@ -140,87 +119,48 @@ local knowncommand = Cmt(cstoken^1, function(_,i,s) return currentcommands[s] and i end) -local utfchar = context.utfchar -local wordtoken = context.patterns.wordtoken -local iwordtoken = context.patterns.iwordtoken -local wordpattern = context.patterns.wordpattern -local iwordpattern = context.patterns.iwordpattern -local invisibles = context.patterns.invisibles -local checkedword = context.checkedword -local styleofword = context.styleofword -local setwordlist = context.setwordlist +local utfchar = lexers.helpers.utfchar +local wordtoken = lexers.patterns.wordtoken +local iwordtoken = lexers.patterns.iwordtoken +local wordpattern = lexers.patterns.wordpattern +local iwordpattern = lexers.patterns.iwordpattern +local invisibles = lexers.patterns.invisibles +local styleofword = lexers.styleofword +local setwordlist = lexers.setwordlist + local validwords = false local validminimum = 3 --- % language=uk - --- fails (empty loop message) ... latest lpeg issue? - --- todo: Make sure we only do this at the beginning .. a pitty that we --- can't store a state .. now is done too often. 
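
The old knownpreamble below ran this detection far too often, as the removed comment above notes; its replacement, assigned to contextlexer.preamble, inspects only the first line of the document for interface= and language= keys, and each key now has to be preceded by a space. A sketch of the match, reusing the patterns from the new code on a typical ConTeXt first line:

    local find, match = string.find, string.match

    local input = "% interface=en language=uk\n% more comment\n\\starttext ..."

    local s, e, line = find(input,"^(.-)[\n\r]",1)
    -- line is now "% interface=en language=uk"

    local interface = match(line," interface=([a-z][a-z]+)") -- "en"
    local language  = match(line," language=([a-z][a-z]+)")  -- "uk"
    -- "%interface=en" would not match: the leading space in the pattern is mandatory
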
- -local knownpreamble = Cmt(P("% "), function(input,i,_) -- todo : utfbomb, was #P("% ") - if i < 10 then - validwords, validminimum = false, 3 - local s, e, word = find(input,"^(.-)[\n\r]",i) -- combine with match - if word then - local interface = match(word,"interface=([a-z][a-z]+)") - if interface and #interface == 2 then - inform("enabling context user interface '%s'",interface) - currentcommands = commands[interface] or commands.en or { } - end - local language = match(word,"language=([a-z][a-z]+)") +-- % language=uk (space before key is mandate) + +contextlexer.preamble = Cmt(P("% ") + P(true), function(input,i) + currentcommands = false + validwords = false + validminimum = 3 + local s, e, line = find(input,"^(.-)[\n\r]",1) -- combine with match + if line then + local interface = match(line," interface=([a-z][a-z]+)") + local language = match(line," language=([a-z][a-z]+)") + if interface and #interface == 2 then + -- report("enabling context user interface '%s'",interface) + currentcommands = commands[interface] + end + if language then validwords, validminimum = setwordlist(language) end end - return false + if not currentcommands then + currentcommands = commands.en or { } + end + return false -- so we go back and now handle the line as comment end) --- -- the token list contains { "style", endpos } entries --- -- --- -- in principle this is faster but it is also crash sensitive for large files - --- local constants_hash = { } for i=1,#constants do constants_hash [constants [i]] = true end --- local helpers_hash = { } for i=1,#helpers do helpers_hash [helpers [i]] = true end --- local primitives_hash = { } for i=1,#primitives do primitives_hash[primitives[i]] = true end - --- local specialword = Ct( P("\\") * Cmt( C(cstoken^1), function(input,i,s) --- if currentcommands[s] then --- return true, "command", i --- elseif constants_hash[s] then --- return true, "data", i --- elseif helpers_hash[s] then --- return true, "plain", i --- elseif primitives_hash[s] then --- return true, "primitive", i --- else -- if starts with if then primitive --- return true, "user", i --- end --- end) ) - --- local specialword = P("\\") * Cmt( C(cstoken^1), function(input,i,s) --- if currentcommands[s] then --- return true, { "command", i } --- elseif constants_hash[s] then --- return true, { "data", i } --- elseif helpers_hash[s] then --- return true, { "plain", i } --- elseif primitives_hash[s] then --- return true, { "primitive", i } --- else -- if starts with if then primitive --- return true, { "user", i } --- end --- end) - --- experiment: keep space with whatever ... 
less tables - --- 10pt - local commentline = P("%") * (1-S("\n\r"))^0 local endline = S("\n\r")^1 local space = patterns.space -- S(" \n\r\t\f\v") local any = patterns.any +local exactmatch = patterns.exactmatch local backslash = P("\\") local hspace = S(" \t") @@ -230,16 +170,16 @@ local p_rest = any local p_preamble = knownpreamble local p_comment = commentline ----- p_command = backslash * knowncommand ------ p_constant = backslash * exact_match(constants) ------ p_helper = backslash * exact_match(helpers) ------ p_primitive = backslash * exact_match(primitives) +----- p_constant = backslash * exactmatch(constants) +----- p_helper = backslash * exactmatch(helpers) +----- p_primitive = backslash * exactmatch(primitives) local p_csdone = #(1-cstoken) + P(-1) -local p_command = backslash * lexer.helpers.utfchartabletopattern(currentcommands) * p_csdone -local p_constant = backslash * lexer.helpers.utfchartabletopattern(constants) * p_csdone -local p_helper = backslash * lexer.helpers.utfchartabletopattern(helpers) * p_csdone -local p_primitive = backslash * lexer.helpers.utfchartabletopattern(primitives) * p_csdone +local p_command = backslash * lexers.helpers.utfchartabletopattern(currentcommands) * p_csdone +local p_constant = backslash * lexers.helpers.utfchartabletopattern(constants) * p_csdone +local p_helper = backslash * lexers.helpers.utfchartabletopattern(helpers) * p_csdone +local p_primitive = backslash * lexers.helpers.utfchartabletopattern(primitives) * p_csdone local p_ifprimitive = P("\\if") * cstoken^1 local p_csname = backslash * (cstoken^1 + P(1)) @@ -252,28 +192,15 @@ local p_reserved = backslash * ( P("??") + R("az") * P("!") ) * cstoken^1 -local p_number = context.patterns.real -local p_unit = P("pt") + P("bp") + P("sp") + P("mm") + P("cm") + P("cc") + P("dd") +local p_number = lexers.patterns.real +----- p_unit = P("pt") + P("bp") + P("sp") + P("mm") + P("cm") + P("cc") + P("dd") + P("dk") +local p_unit = lexers.helpers.utfchartabletopattern { "pt", "bp", "sp", "mm", "cm", "cc", "dd", "dk" } -- no looking back = #(1-S("[=")) * cstoken^3 * #(1-S("=]")) --- This one gives stack overflows: --- --- local p_word = Cmt(iwordpattern, function(_,i,s) --- if validwords then --- return checkedword(validwords,validminimum,s,i) --- else --- -- return true, { "text", i } --- return true, "text", i --- end --- end) --- --- So we use this one instead: - ------ p_word = Ct( iwordpattern / function(s) return styleofword(validwords,validminimum,s) end * Cp() ) -- the function can be inlined -local p_word = iwordpattern / function(s) return styleofword(validwords,validminimum,s) end * Cp() -- the function can be inlined +local p_word = C(iwordpattern) * Cp() / function(s,p) return styleofword(validwords,validminimum,s,p) end -- a bit of a hack ------ p_text = (1 - p_grouping - p_special - p_extra - backslash - space + hspace)^1 +----- p_text = (1 - p_grouping - p_special - p_extra - backslash - space + hspace)^1 -- keep key pressed at end-of syst-aux.mkiv: -- @@ -319,30 +246,29 @@ end local p_invisible = invisibles^1 -local spacing = token(whitespace, p_spacing ) - -local rest = token("default", p_rest ) -local preamble = token("preamble", p_preamble ) -local comment = token("comment", p_comment ) -local command = token("command", p_command ) -local constant = token("data", p_constant ) -local helper = token("plain", p_helper ) -local primitive = token("primitive", p_primitive ) -local ifprimitive = token("primitive", p_ifprimitive) -local reserved = token("reserved", p_reserved ) 
-local csname = token("user", p_csname ) -local grouping = token("grouping", p_grouping ) -local number = token("number", p_number ) - * token("constant", p_unit ) -local special = token("special", p_special ) -local reserved = token("reserved", p_reserved ) -- reserved internal preproc -local extra = token("extra", p_extra ) -local invisible = token("invisible", p_invisible ) -local text = token("default", p_text ) +local spacing = token(texwhitespace, p_spacing ) + +local rest = token("default", p_rest ) +local comment = token("comment", p_comment ) +local command = token("command", p_command ) +local constant = token("data", p_constant ) +local helper = token("plain", p_helper ) +local primitive = token("primitive", p_primitive ) +local ifprimitive = token("primitive", p_ifprimitive) +local reserved = token("reserved", p_reserved ) +local csname = token("user", p_csname ) +local grouping = token("grouping", p_grouping ) +local number = token("number", p_number ) + * token("constant", p_unit ) +local special = token("special", p_special ) +local reserved = token("reserved", p_reserved ) -- reserved internal preproc +local extra = token("extra", p_extra ) +local invisible = token("invisible", p_invisible ) +local text = token("default", p_text ) local word = p_word ------ startluacode = token("grouping", P("\\startluacode")) ------ stopluacode = token("grouping", P("\\stopluacode")) +----- startluacode = token("grouping", P("\\startluacode")) +----- stopluacode = token("grouping", P("\\stopluacode")) local luastatus = false local luatag = nil @@ -351,14 +277,14 @@ local lualevel = 0 local function startdisplaylua(_,i,s) luatag = s luastatus = "display" - cldlexer._directives.cld_inline = false + cldlexer.directives.cld_inline = false return true end local function stopdisplaylua(_,i,s) local ok = luatag == s if ok then - cldlexer._directives.cld_inline = false + cldlexer.directives.cld_inline = false luastatus = false end return ok @@ -369,7 +295,7 @@ local function startinlinelua(_,i,s) return false elseif not luastatus then luastatus = "inline" - cldlexer._directives.cld_inline = true + cldlexer.directives.cld_inline = true lualevel = 1 return true else-- if luastatus == "inline" then @@ -396,7 +322,7 @@ local function stopinlinelua_e(_,i,s) -- } lualevel = lualevel - 1 local ok = lualevel <= 0 -- was 0 if ok then - cldlexer._directives.cld_inline = false + cldlexer.directives.cld_inline = false luastatus = false end return ok @@ -405,7 +331,7 @@ local function stopinlinelua_e(_,i,s) -- } end end -contextlexer._reset_parser = function() +contextlexer.resetparser = function() luastatus = false luatag = nil lualevel = 0 @@ -462,17 +388,11 @@ local stopmetafuncode = token("embedded", stopmetafun) local callers = token("embedded", P("\\") * metafuncall) * metafunarguments + token("embedded", P("\\") * luacall) -lexer.embed_lexer(contextlexer, mpslexer, startmetafuncode, stopmetafuncode) -lexer.embed_lexer(contextlexer, cldlexer, startluacode, stopluacode) - --- preamble is inefficient as it probably gets called each time (so some day I really need to --- patch the plugin) +lexers.embed(contextlexer, mpslexer, startmetafuncode, stopmetafuncode) +lexers.embed(contextlexer, cldlexer, startluacode, stopluacode) -contextlexer._preamble = preamble - -contextlexer._rules = { +contextlexer.rules = { { "whitespace", spacing }, - -- { "preamble", preamble }, { "word", word }, { "text", text }, -- non words { "comment", comment }, @@ -499,13 +419,11 @@ contextlexer._rules = { -- Watch the text 
grabber, after all, we're talking mostly of text (beware, -- no punctuation here as it can be special). We might go for utf here. -local web = lexer.loadluafile("scite-context-lexer-web-snippets") +local web = lexers.loadluafile("scite-context-lexer-web-snippets") if web then - lexer.inform("supporting web snippets in tex lexer") - - contextlexer._rules_web = { + contextlexer.rules_web = { { "whitespace", spacing }, { "text", text }, -- non words { "comment", comment }, @@ -527,9 +445,7 @@ if web then else - lexer.report("not supporting web snippets in tex lexer") - - contextlexer._rules_web = { + contextlexer.rules_web = { { "whitespace", spacing }, { "text", text }, -- non words { "comment", comment }, @@ -550,39 +466,31 @@ else end -contextlexer._tokenstyles = context.styleset - -local environment = { - ["\\start"] = 1, ["\\stop"] = -1, - -- ["\\begin"] = 1, ["\\end" ] = -1, -} - --- local block = { --- ["\\begin"] = 1, ["\\end" ] = -1, --- } - -local group = { - ["{"] = 1, ["}"] = -1, -} - -contextlexer._foldpattern = P("\\" ) * (P("start") + P("stop")) + S("{}") -- separate entry else interference - -contextlexer._foldsymbols = { -- these need to be style references .. todo: multiple styles - _patterns = { - "\\start", "\\stop", -- regular environments - -- "\\begin", "\\end", -- (moveable) blocks - "[{}]", +contextlexer.folding = { + ["\\start"] = { + ["command"] = 1, + ["constant"] = 1, + ["data"] = 1, + ["user"] = 1, + ["embedded"] = 1, + -- ["helper"] = 1, + ["plain"] = 1, + }, + ["\\stop"] = { + ["command"] = -1, + ["constant"] = -1, + ["data"] = -1, + ["user"] = -1, + ["embedded"] = -1, + -- ["helper"] = -1, + ["plain"] = -1, + }, + ["{"] = { + ["grouping"] = 1, + }, + ["}"] = { + ["grouping"] = -1, }, - ["command"] = environment, - ["constant"] = environment, - ["data"] = environment, - ["user"] = environment, - ["embedded"] = environment, - ["helper"] = environment, - ["plain"] = environment, - ["grouping"] = group, } --- context.inspect(contextlexer) - return contextlexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-txt.lua b/context/data/scite/context/lexers/scite-context-lexer-txt.lua index 8ecfff7cb..5b48657a9 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-txt.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-txt.lua @@ -9,53 +9,45 @@ local info = { -local P, S, Cmt, Cp = lpeg.P, lpeg.S, lpeg.Cmt, lpeg.Cp +local P, S, C, Cmt, Cp = lpeg.P, lpeg.S, lpeg.C, lpeg.Cmt, lpeg.Cp local find, match = string.find, string.match -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token +local patterns = lexers.patterns +local token = lexers.token +local styleofword = lexers.styleofword +local setwordlist = lexers.setwordlist -local textlexer = lexer.new("txt","scite-context-lexer-txt") -local whitespace = textlexer.whitespace +local textlexer = lexers.new("txt","scite-context-lexer-txt") +local textwhitespace = textlexer.whitespace + +local space = patterns.space +local any = patterns.any +local wordtoken = patterns.wordtoken +local wordpattern = patterns.wordpattern -local space = patterns.space -local any = patterns.any -local wordtoken = patterns.wordtoken -local wordpattern = patterns.wordpattern -local checkedword = context.checkedword -local styleofword = context.styleofword -local setwordlist = context.setwordlist local validwords = false local validminimum = 3 --- local styleset = context.newstyleset { --- "default", --- "text", "okay", "error", "warning", --- "preamble", --- }
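-- The validwords and validminimum state above drives the spell checking: setwordlist(language) is
-- assumed to hand back a wordlist plus a minimum word length, and the word rule then asks for a
-- style per word. A minimal sketch of the decision a helper like lexers.styleofword presumably
-- makes with that state (the real helper also knows the "warning" style from the styleset above
-- and is utf aware; this simplified name and body are illustrative only):

local function simplifiedstyleofword(validwords,validminimum,word,position)
    if not validwords or #word < validminimum then
        return "text", position  -- no wordlist active or word too short: plain text
    elseif validwords[word] then
        return "okay", position  -- listed word: shown as correct
    else
        return "error", position -- unlisted word: flagged as a possible typo
    end
end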
"preamble", --- } - --- [#!-%] language=uk - -local p_preamble = Cmt((S("#!-%") * P(" ")), function(input,i,_) -- todo: utf bomb no longer # - if i == 1 then -- < 10 then - validwords, validminimum = false, 3 - local s, e, line = find(input,"^[#!%-%%](.+)[\n\r]",i) - if line then - local language = match(line,"language=([a-z]+)") - if language then - validwords, validminimum = setwordlist(language) - end +-- [#!-%] language=uk (space before key is mandate) + +local p_preamble = Cmt((S("#!-%") * P(" ") + P(true)), function(input,i) + validwords = false + validminimum = 3 + local s, e, line = find(input,"^[#!%-%%](.+)[\n\r]",1) + if line then + local language = match(line," language=([a-z]+)") + if language then + validwords, validminimum = setwordlist(language) end end - return false + return false -- so we go back and now handle the line as text end) local t_preamble = token("preamble", p_preamble) local t_word = - wordpattern / function(s) return styleofword(validwords,validminimum,s) end * Cp() -- the function can be inlined + C(wordpattern) * Cp() / function(s,p) return styleofword(validwords,validminimum,s,p) end -- a bit of a hack local t_text = token("default", wordtoken^1) @@ -64,9 +56,9 @@ local t_rest = token("default", (1-wordtoken-space)^1) local t_spacing = - token(whitespace, space^1) + token(textwhitespace, space^1) -textlexer._rules = { +textlexer.rules = { { "whitespace", t_spacing }, { "preamble", t_preamble }, { "word", t_word }, -- words >= 3 @@ -74,7 +66,4 @@ textlexer._rules = { { "rest", t_rest }, } -textlexer._LEXBYLINE = true -- new (needs testing, not yet as the system changed in 3.24) -textlexer._tokenstyles = context.styleset - return textlexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-web-snippets.lua b/context/data/scite/context/lexers/scite-context-lexer-web-snippets.lua index 5121030cc..2ef661e2e 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-web-snippets.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-web-snippets.lua @@ -8,11 +8,10 @@ local info = { local P, R, S, C, Cg, Cb, Cs, Cmt, lpegmatch = lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.Cg, lpeg.Cb, lpeg.Cs, lpeg.Cmt, lpeg.match -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token +local patterns = lexers.patterns +local token = lexers.token local websnippets = { } @@ -25,9 +24,9 @@ local squote = P("'") local dquote = P('"') local period = P(".") -local t_whitespace = token(whitespace, space^1) -local t_spacing = token("default", space^1) -local t_rest = token("default", any) +local t_whitespace = token("whitespace", space^1) +local t_spacing = token("default", space^1) +local t_rest = token("default", any) -- the web subset diff --git a/context/data/scite/context/lexers/scite-context-lexer-web.lua b/context/data/scite/context/lexers/scite-context-lexer-web.lua index 81a6f90df..6325e3693 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-web.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-web.lua @@ -8,25 +8,23 @@ local info = { local P, R, S = lpeg.P, lpeg.R, lpeg.S -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token -local exact_match = lexer.exact_match +local patterns = lexers.patterns +local token = lexers.token -local weblexer = 
lexer.new("web","scite-context-lexer-web") -local whitespace = weblexer.whitespace +local weblexer = lexers.new("web","scite-context-lexer-web") +local webwhitespace = weblexer.whitespace local space = patterns.space -- S(" \n\r\t\f\v") local any = patterns.any local restofline = patterns.restofline -local startofline = patterns.startofline +local eol = patterns.eol local period = P(".") local percent = P("%") -local spacing = token(whitespace, space^1) +local spacing = token(webwhitespace, space^1) local rest = token("default", any) local eop = P("@>") @@ -35,33 +33,54 @@ local eos = eop * P("+")^-1 * P("=") -- we can put some of the next in the web-snippets file -- is f okay here? -local texcomment = token("comment", percent * restofline^0) +-- This one is hard to handle partial because trailing spaces are part of the tex part as well +-- as the c part so they are bound to that. We could have some special sync signal like a label +-- with space-like properties (more checking then) or styles that act as boundary (basically any +-- style + 128 or so). A sunday afternoon challenge. Maybe some newline trickery? Or tag lines +-- which is possible in scite. Or how about a function hook: foolexer.backtracker(str) where str +-- matches at the beginning of a line: foolexer.backtracker("@ @c") or a pattern, maybe even a +-- match from start. -local texpart = token("label",P("@")) * #spacing +-- local backtracker = ((lpeg.Cp() * lpeg.P("@ @c")) / function(p) n = p end + lpeg.P(1))^1 +-- local c = os.clock() print(#s) print(lpeg.match(backtracker,s)) print(n) print(c) + +-- local backtracker = (lpeg.Cmt(lpeg.P("@ @c"),function(_,p) n = p end) + lpeg.P(1))^1 +-- local c = os.clock() print(#s) print(lpeg.match(backtracker,s)) print(n) print(c) + +----- somespace = spacing +----- somespace = token("whitespace",space^1) +local somespace = space^1 + +local texpart = token("label",P("@")) * #somespace + token("label",P("@") * P("*")^1) * token("function",(1-period)^1) * token("label",period) -local midpart = token("label",P("@d")) * #spacing - + token("label",P("@f")) * #spacing -local cpppart = token("label",P("@c")) * #spacing - + token("label",P("@p")) * #spacing +local midpart = token("label",P("@d")) * #somespace + + token("label",P("@f")) * #somespace +local cpppart = token("label",P("@c")) * #somespace + + token("label",P("@p")) * #somespace + token("label",P("@") * S("<(")) * token("function",(1-eop)^1) * token("label",eos) local anypart = P("@") * ( P("*")^1 + S("dfcp") + space^1 + S("<(") * (1-eop)^1 * eos ) local limbo = 1 - anypart - percent -local texlexer = lexer.load("scite-context-lexer-tex-web") -local cpplexer = lexer.load("scite-context-lexer-cpp-web") +weblexer.backtracker = eol^1 * P("@ @c") +-- weblexer.foretracker = (space-eol)^0 * eol^1 * P("@") * space + anypart +weblexer.foretracker = anypart -lexer.embed_lexer(weblexer, texlexer, texpart + limbo, #anypart) -lexer.embed_lexer(weblexer, cpplexer, cpppart + midpart, #anypart) +local texlexer = lexers.load("scite-context-lexer-tex-web") +local cpplexer = lexers.load("scite-context-lexer-cpp-web") -local texcomment = token("comment", percent * restofline^0) +-- local texlexer = lexers.load("scite-context-lexer-tex") +-- local cpplexer = lexers.load("scite-context-lexer-cpp") -weblexer._rules = { +lexers.embed(weblexer, texlexer, texpart + limbo, #anypart) +lexers.embed(weblexer, cpplexer, cpppart + midpart, #anypart) + +local texcomment = token("comment", percent * restofline^0) + +weblexer.rules = { { "whitespace", spacing }, { 
"texcomment", texcomment }, -- else issues with first tex section { "rest", rest }, } -weblexer._tokenstyles = context.styleset - return weblexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-xml-cdata.lua b/context/data/scite/context/lexers/scite-context-lexer-xml-cdata.lua index f5ca86cb2..af0570f68 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-xml-cdata.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-xml-cdata.lua @@ -8,26 +8,22 @@ local info = { local P = lpeg.P -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token +local patterns = lexers.patterns +local token = lexers.token -local xmlcdatalexer = lexer.new("xml-cdata","scite-context-lexer-xml-cdata") -local whitespace = xmlcdatalexer.whitespace +local xmlcdatalexer = lexers.new("xml-cdata","scite-context-lexer-xml-cdata") local space = patterns.space local nospace = 1 - space - P("]]>") -local t_spaces = token(whitespace, space ^1) -local t_cdata = token("comment", nospace^1) +local t_spaces = token("whitespace", space^1) +local t_cdata = token("comment", nospace^1) -xmlcdatalexer._rules = { +xmlcdatalexer.rules = { { "whitespace", t_spaces }, { "cdata", t_cdata }, } -xmlcdatalexer._tokenstyles = context.styleset - return xmlcdatalexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-xml-comment.lua b/context/data/scite/context/lexers/scite-context-lexer-xml-comment.lua index 40de8f603..8b14d8295 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-xml-comment.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-xml-comment.lua @@ -8,26 +8,22 @@ local info = { local P = lpeg.P -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token +local patterns = lexers.patterns +local token = lexers.token -local xmlcommentlexer = lexer.new("xml-comment","scite-context-lexer-xml-comment") -local whitespace = xmlcommentlexer.whitespace +local xmlcommentlexer = lexers.new("xml-comment","scite-context-lexer-xml-comment") local space = patterns.space local nospace = 1 - space - P("-->") -local t_spaces = token(whitespace, space ^1) -local t_comment = token("comment", nospace^1) +local t_spaces = token("whitespace", space^1) +local t_comment = token("comment", nospace^1) -xmlcommentlexer._rules = { +xmlcommentlexer.rules = { { "whitespace", t_spaces }, { "comment", t_comment }, } -xmlcommentlexer._tokenstyles = context.styleset - return xmlcommentlexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-xml-script.lua b/context/data/scite/context/lexers/scite-context-lexer-xml-script.lua index a1b717a6a..38788f5fe 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-xml-script.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-xml-script.lua @@ -8,26 +8,22 @@ local info = { local P = lpeg.P -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token +local patterns = lexers.patterns +local token = lexers.token -local xmlscriptlexer = lexer.new("xml-script","scite-context-lexer-xml-script") -local whitespace = xmlscriptlexer.whitespace +local xmlscriptlexer = lexers.new("xml-script","scite-context-lexer-xml-script") local space = 
patterns.space local nospace = 1 - space - (P("</script>") + P("</SCRIPT>")) -local t_spaces = token(whitespace, space ^1) -local t_script = token("default", nospace^1) +local t_spaces = token("whitespace", space^1) +local t_script = token("default", nospace^1) -xmlscriptlexer._rules = { +xmlscriptlexer.rules = { { "whitespace", t_spaces }, { "script", t_script }, } -xmlscriptlexer._tokenstyles = context.styleset - return xmlscriptlexer diff --git a/context/data/scite/context/lexers/scite-context-lexer-xml.lua b/context/data/scite/context/lexers/scite-context-lexer-xml.lua index bbdb3febc..e635d4019 100644 --- a/context/data/scite/context/lexers/scite-context-lexer-xml.lua +++ b/context/data/scite/context/lexers/scite-context-lexer-xml.lua @@ -17,20 +17,19 @@ local P, R, S, C, Cmt, Cp = lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.Cmt, lpeg.Cp local type = type local match, find = string.match, string.find -local lexer = require("scite-context-lexer") -local context = lexer.context -local patterns = context.patterns +local lexers = require("scite-context-lexer") -local token = lexer.token -local exact_match = lexer.exact_match +local patterns = lexers.patterns +local token = lexers.token -local xmllexer = lexer.new("xml","scite-context-lexer-xml") -local whitespace = xmllexer.whitespace +local xmllexer = lexers.new("xml","scite-context-lexer-xml") +local xmlwhitespace = xmllexer.whitespace + +local xmlcommentlexer = lexers.load("scite-context-lexer-xml-comment") +local xmlcdatalexer = lexers.load("scite-context-lexer-xml-cdata") +local xmlscriptlexer = lexers.load("scite-context-lexer-xml-script") +local lualexer = lexers.load("scite-context-lexer-lua") -local xmlcommentlexer = lexer.load("scite-context-lexer-xml-comment") -local xmlcdatalexer = lexer.load("scite-context-lexer-xml-cdata") -local xmlscriptlexer = lexer.load("scite-context-lexer-xml-script") -local lualexer = lexer.load("scite-context-lexer-lua") local space = patterns.space local any = patterns.any @@ -70,15 +69,14 @@ local closelua = "?>" local entity = ampersand * (1-semicolon)^1 * semicolon -local utfchar = context.utfchar -local wordtoken = context.patterns.wordtoken -local iwordtoken = context.patterns.iwordtoken -local wordpattern = context.patterns.wordpattern -local iwordpattern = context.patterns.iwordpattern -local invisibles = context.patterns.invisibles -local checkedword = context.checkedword -local styleofword = context.styleofword -local setwordlist = context.setwordlist +local utfchar = lexers.helpers.utfchar +local wordtoken = patterns.wordtoken +local iwordtoken = patterns.iwordtoken +local wordpattern = patterns.wordpattern +local iwordpattern = patterns.iwordpattern +local invisibles = patterns.invisibles +local styleofword = lexers.styleofword +local setwordlist = lexers.setwordlist local validwords = false local validminimum = 3 @@ -86,23 +84,18 @@ local validminimum = 3 -- -- -local t_preamble = Cmt(P("<?xml "), function(input,i,_) - local language = match(input,"^<%?xml[^>]*%?>%s*<%?context%-directive%s+editor%s+language%s+(..)%s+%?>",i) - -- if not language then - -- language = match(input,"^<%?xml[^>]*language=[\"\'](..)[\"\'][^>]*%?>",i) - -- end - if language then - validwords, validminimum = setwordlist(language) - end +xmllexer.preamble = Cmt(P("<?xml "), function(input,i) + validwords = false + validminimum = 3 + local language = match(input,"^<%?xml[^>]*%?>%s*<%?context%-directive%s+editor%s+language%s+(..)%s+%?>") + if language then + validwords, validminimum = setwordlist(language) end - return false + return false -- so we go back and now handle the line as processing instruction end)
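-- The t_word rules in these lexers (txt above, xml below) share one capture idiom: C(pattern) *
-- Cp() / f hands the matched word plus the position after it to f, and whatever f returns becomes
-- the token. A self-contained sketch of that mechanism with plain lpeg, away from the lexer
-- framework (the style names here are just placeholders):

local lpeg = require("lpeg")
local C, Cp, R = lpeg.C, lpeg.Cp, lpeg.R

local word = R("az")^1

local t_word = C(word) * Cp() / function(s,p)
    -- return a style name plus the end position, as the lexers do
    return (#s >= 3 and "text" or "default"), p
end

print(lpeg.match(t_word,"hello")) -- text 6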
local t_word = --- Ct( iwordpattern / function(s) return styleofword(validwords,validminimum,s) end * Cp() ) -- the function can be inlined - iwordpattern / function(s) return styleofword(validwords,validminimum,s) end * Cp() -- the function can be inlined + C(iwordpattern) * Cp() / function(s,p) return styleofword(validwords,validminimum,s,p) end -- a bit of a hack local t_rest = token("default", any) @@ -111,7 +104,7 @@ local t_text = token("default", (1-S("<>&")-space)^1) local t_spacing = - token(whitespace, space^1) + token(xmlwhitespace, space^1) local t_optionalwhitespace = token("default", space^1)^0 @@ -227,10 +220,10 @@ local t_doctype = token("command",P("<!DOCTYPE")) -lexer.embed_lexer(xmllexer, lualexer, token("command", openlua), token("command", closelua)) -lexer.embed_lexer(xmllexer, xmlcommentlexer, token("command", opencomment), token("command", closecomment)) -lexer.embed_lexer(xmllexer, xmlcdatalexer, token("command", opencdata), token("command", closecdata)) -lexer.embed_lexer(xmllexer, xmlscriptlexer, token("command", openscript), token("command", closescript)) +lexers.embed(xmllexer, lualexer, token("command", openlua), token("command", closelua)) +lexers.embed(xmllexer, xmlcommentlexer, token("command", opencomment), token("command", closecomment)) +lexers.embed(xmllexer, xmlcdatalexer, token("command", opencdata), token("command", closecdata)) +lexers.embed(xmllexer, xmlscriptlexer, token("command", openscript), token("command", closescript)) -- local t_name = -- token("plain",name) @@ -303,12 +296,8 @@ local t_instruction = local t_invisible = token("invisible",invisibles^1) --- local t_preamble = --- token("preamble", t_preamble ) - -xmllexer._rules = { +xmllexer.rules = { { "whitespace", t_spacing }, - { "preamble", t_preamble }, { "word", t_word }, -- { "text", t_text }, -- { "comment", t_comment }, @@ -322,29 +311,15 @@ xmllexer._rules = { { "rest", t_rest }, } -xmllexer._tokenstyles = context.styleset - -xmllexer._foldpattern = P("</") -- separate entry else interference -+ P("<") - -xmllexer._foldsymbols = { - _patterns = { - "</", - "<", - }, - ["keyword"] = { - ["</"] = -1, - ["<"] = 1, - }, - ["command"] = { - ["</"] = -1, - ["/>"] = -1, - ["<"] = 1, - }, +xmllexer.folding = { + ["</"] = { ["keyword"] = -1 }, + ["<"] = { ["keyword"] = 1 }, + ["<!--"] = { ["command"] = 1 }, + ["-->"] = { ["command"] = -1 }, + ["?>"] = { ["command"] = -1 }, } return xmllexer
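-- Both lexers above now describe folding declaratively: each key is a fold word that maps token
-- styles to a fold level change, replacing the old _foldpattern/_foldsymbols pair. The style acts
-- as a guard, so a brace or tag only (un)folds when it was actually lexed with that style, not
-- when it sits in a comment or a string. A minimal sketch of the convention for a made-up lexer
-- (the table layout is the one used above; the lexer name is hypothetical and this assumes the
-- lexers module from scite-context-lexer is in scope):

local minimallexer = lexers.new("minimal","scite-context-lexer-minimal")

minimallexer.folding = {
    ["{"] = { ["grouping"] =  1 }, -- open brace lexed as grouping: one level deeper
    ["}"] = { ["grouping"] = -1 }, -- close brace lexed as grouping: back out
}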
diff --git a/context/data/scite/context/lexers/scite-context-lexer.lua b/context/data/scite/context/lexers/scite-context-lexer.lua index 289697b72..fd74ba55a 100644 --- a/context/data/scite/context/lexers/scite-context-lexer.lua +++ b/context/data/scite/context/lexers/scite-context-lexer.lua @@ -8,441 +8,128 @@ local info = { } --- We need a copy of this file to lexer.lua in the same path. This was not needed --- before version 10 but I can't figure out what else to do. It looks like there --- is some loading of lexer.lua but I can't see where. - --- For a while it looked like we're stuck with scite 3 because there would be no --- update of scintillua for the newer versions (c++ changes) but now it looks like --- there will be updates (2021). There is a dll for scite >= 5 but it doesn't --- work (yet). In version 5.20+ the scintillua dll makes scite crash (also when I --- use the recommended import). In an early 5.02 loading the (shipped) lpeg lexer --- does nothing at all. There have been changes in the lua interface too but I need --- to compare the old and new lib. For now I gave up and got back to version 3+. It --- would be nice if error messages would go to the log pane so that we get an idea of --- what happens. After all the code involved (below) is not that much and not that --- complex either. +-- There is some history behind these lexers. When LPEG came around, we immediately adopted that in CONTEXT +-- and one of the first things to show up were the verbatim plugins. There we have several models: line based +-- and syntax based. The way we visualize the syntax for TEX, METAPOST and LUA relates closely to the way the +-- CONTEXT user interface evolved. We have LPEG all over the place. -- --- Actually, scite 5.22 also crashed when a program was launched so better wait --- for a while. (In the worst case, when it all stops working, we need to migrate --- to visual code, which is our backup/fallback plan.) I didn't test if the latest --- textadept still works with our lexer variant. In the meantime that editor has --- grown to some 30 MB so it is no longer a lightweight option (scite with scintilla --- is still quite small). - -if lpeg.setmaxstack then lpeg.setmaxstack(1000) end - -local log = false -local trace = false -local detail = false -local show = false -- nice for tracing (also for later) -local collapse = false -- can save some 15% (maybe easier on scintilla) -local inspect = false -- can save some 15% (maybe easier on scintilla) - --- local log = true --- local trace = true - --- GET GOING --- --- You need to copy this file over lexer.lua. In principle other lexers could work --- too but not now. Maybe some day. All patterns will move into the patterns name --- space. I might do the same with styles. If you run an older version of SciTE you --- can take one of the archives. Pre 3.41 versions can just be copied to the right --- path, as there we still use part of the normal lexer. Below we mention some --- issues with different versions of SciTE. We try to keep up with changes but best --- check carefully if the version that you install works as expected because SciTE --- and the scintillua dll need to be in sync. --- --- REMARK --- --- We started using lpeg lexing as soon as it became available. Because we had rather --- demanding files and also wanted to use nested lexers, we ended up with our own --- variant. At least at that time this was more robust and also much faster (as we --- have some pretty large Lua data files and also work with large xml files). As a --- consequence successive versions had to be adapted to changes in the (at that time --- still unstable) api. In addition to lexing we also have spell checking and such. --- Around version 3.60 things became more stable so I don't expect to change much. --- --- LEXING --- --- When PCs showed up we wrote our own editor (texedit) in MODULA 2. It was fast, --- had multiple overlapping (text) windows, could run in the at most 1M of memory at --- that time, etc. The realtime file browsing with lexing that we had at that time --- is still on my current wish list. The color scheme and logic that we used related --- to the logic behind the ConTeXt user interface that evolved. --- --- Later I rewrote the editor in perl/tk. I don't like the perl syntax but tk --- widgets are very powerful and hard to beat. In fact, TextAdept reminds me of --- that: wrap your own interface around a framework (tk had an edit control that one --- could control completely, not that different from scintilla). Last time I checked --- it still ran fine so I might try to implement something like its file handling in --- TextAdept. --- --- In the end I settled for SciTE for which I wrote TeX and MetaPost lexers that --- could handle keyword sets.
With respect to lexing (syntax highlighting) ConTeXt -- has a long history, if only because we need it for manuals. Anyway, in the end we -- arrived at lpeg based lexing (which is quite natural as we have lots of lpeg -- usage in ConTeXt). The basic color schemes haven't changed much. The most -- prominent differences are the nested lexers. -- -- In the meantime I made the lexer suitable for typesetting sources which was no -- big deal as we already had that in place (ConTeXt used lpeg from the day it -- showed up so we have several lexing options there too). -- -- Keep in mind that in ConTeXt (typesetting) lexing can follow several approaches: -- line based (which is handy for verbatim mode), syntax mode (which is nice for -- tutorials), and tolerant mode (so that one can also show bad examples or errors). -- These demands can clash. -- -- HISTORY -- -- The remarks below are more for myself so that I keep track of changes in the -- way we adapt to the changes in scintillua and scite. -- -- The fold and lex functions are copied and patched from original code by Mitchell -- (see lexer.lua) in the scintillua distribution. So whatever I say below, assume -- that all errors are mine. The ability to use lpeg in scintilla is a real nice -- addition and a brilliant move. The code is a byproduct of the (mainly Lua based) -- TextAdept which at the time I ran into it was a rapidly moving target so I -- decided to stick to SciTE. When I played with it, it had no realtime output pane -- although that seems to be dealt with now (2017). I need to have a look at it in -- more detail but a first test again made the output hang and it was a bit slow too -- (and I also want the log pane as SciTE has it, on the right, in view). So, for -- now I stick to SciTE even when it's somewhat crippled by the fact that we cannot -- hook our own (language dependent) lexer into the output pane (somehow the -- errorlist lexer is hard coded into the editor). Hopefully that will change some -- day. The ConTeXt distribution has a cmd runner for textadept that will plug in the -- lexers discussed here as well as a dedicated runner. Consider it an experiment. -- -- The basic code hasn't changed much but we had to adapt a few times to changes in -- the api and/or work around bugs. Starting with SciTE version 3.20 there was an -- issue with coloring. We still lacked a connection with SciTE itself (properties -- as well as printing to the log pane) and we could not trace this (on windows). -- However on unix we can see messages! As far as I can see, there are no -- fundamental changes in lexer.lua or LexLPeg.cxx so it must be/have been in -- Scintilla itself. So we went back to 3.10. Indicators of issues are: no lexing of -- 'next' and 'goto