Didier Verna's scientific blog: Lisp, Emacs, LaTeX and random stuff.

## LaTeX

Monday, January 28 2013

## FiXme 4.2 is out

I'm pleased to announce that, after more than two years, I've managed to put up a very small release of FiXme (my collaborative annotations tool for LaTeX2e) in which I didn't even author the two included changes...

Keep the faith. FiXme is still alive!

New in this version (4.2):

** Improve Danish translation
** Fix buglet in redefinition of \@wrindex, reported by Norman Gray


Get it at the usual place.

Wednesday, March 21 2012

## Star TeX, the Next Generation

I'm happy to announce that my contribution to TUG 2012, the next TeX Users Group International conference, has been accepted. Please find the title and abstract below.

Star TeX, the Next Generation

In 2010, I asked Donald Knuth why he chose to design and implement TeX as a macro-expansion system (as opposed to more traditional procedure calls). His answer was that:

1. he wanted something relatively simple for his secretary who was not a computer scientist,
2. the very limited computing resources at that time practically mandated the use of something much lighter than a true programming language.

The first part of the answer left me with a slight feeling of skepticism. It remains to be seen whether TeX really is simple to use, and when or where it is, its underlying implementation has hardly anything to do with it.

The second part of the answer, on the other hand, was both very convincing and arguably now obsolete as well. Time has passed and the situation today is very different from what it was 50 years ago. The available computing power has grown exponentially, and so have our overall skills in language design and implementation.

Several ideas on how to modernize TeX already exist. Some have actually been implemented. In this talk, I will present mine. Interestingly enough, it seems to me that modernizing TeX can start with grounding it in an old yet very modern programming language: Common Lisp. I will present the key features that make this language particularly well suited to the task, emphasizing points such as extensibility, scriptability and multi-paradigm programming. The presentation will include reflections about the software engineering aspects (internals), as well as about the surface layer of TeX itself. Most notably, I will explore the possibilities of providing a more consistent syntax for the TeX API, while maintaining backward compatibility with the existing code base.

Tuesday, July 19 2011

## LaTeX Coding Standards

EDIT: the paper is now freely available for non-TUG members.

I'm happy to announce that my contribution to TUG 2011, the next TeX Users Group International conference, has been accepted. Please find the title and abstract below.

Towards LaTeX Coding Standards

Because LaTeX (and ultimately TeX) is only a macro-expansion system, the language does not impose any kind of good software engineering practice, program structure or coding style whatsoever. As a consequence, writing beautiful code (for some definition of "beautiful") requires a lot of self-discipline from the programmer.

Maybe because collaboration is not so widespread in the LaTeX world (most packages are single-authored), the idea of LaTeX coding standards is not as pressing as with other programming languages. Some people may have developed their own programming habits, and probably have, but when it comes to the LaTeX world as a whole, the situation is close to anarchy.

Over the years, the permanent flow of personal development experiences has contributed to shaping my own taste in terms of coding style. The issues involved are numerous and their spectrum is very large: they range from simple code layout (formatting, indentation, naming schemes, etc.), through mid-level concerns such as modularity and encapsulation, to very high-level concerns like package interaction/conflict management and even some rules for proper social behavior.

In this talk, I will report on all these experiences and describe what I think are good (or at least better) programming practices. I believe that such practices do help in terms of code readability, maintainability and extensibility, all key factors in software evolution. They help me, perhaps they will help you too.

Thursday, December 16 2010

## DoX 2.2 is released

Hello, I'm happy to announce the release of DoX v2.2. DoX is a set of extensions to the Doc package, for LaTeX2e class and style authors.

New in this release: the ability to create new control-sequence based documentation items (for instance LaTeX lengths).

Tuesday, December 14 2010

## CurVe 1.16 is out

Hello,

I'm happy to announce the release of CurVe 1.16. CurVe is a CV class for LaTeX2e.

New in this release:

- An examples directory
- New \text macro to insert plain text in the middle of rubrics
- Support for the openbib option, which was implicit before
- Fix incompatibilities with the splitbib package
- Handle the bibentry/hyperref incompatibility directly
- Implement old font commands, letting packages that use them (e.g. fancyhdr) work correctly

Friday, December 3 2010

## FiNK 2.2 is out

Hello,

I'm happy to announce the release of FiNK 2.2. FiNK is the LaTeX2e File Name Keeper. New in this release: FiNK is now compatible with the memoir class.

Grab it here

Wednesday, December 1 2010

## Nice feedback on my TUG 2010 paper

Here's a nice comment from a reader of the TUGBoat on my TUG 2010 paper entitled "Classes, Styles, Conflicts: the Biological Realm of LaTeX":

I really enjoy Didier Verna's paper (pp. 162-172). His analogies between LaTeX and microbiology is truly exciting! Being neither a TeXnician nor a (micro) biologist, the paper gives me more insight about LaTeX while at the same time giving me a glimpse to a world beyond my narrow field of knowledge. Please do extend my compliments to the author.

Tuesday, October 5 2010

## Classes, Styles, Conflicts: the Biological Realm of LaTeX

I'm pleased to announce that my article entitled "Classes, Styles, Conflicts: the Biological Realm of LaTeX" has been published in the TUGboat journal, Volume 32, No. 2.

There is also a live video recording of the presentation. See http://www.lrde.epita.fr/~didier/resear ... rna.10.tug

Tuesday, March 9 2010

## Paper accepted at TUG 2010

Hello,

I'm happy to announce that I will be presenting a paper at TUG 2010, in San Francisco, for the 2^5th birthday of TeX. The abstract is given below:

Classes, Styles, Conflicts: the Biological Realm of LaTeX

Every LaTeX user faces the "compatibility nightmare" one day or another. With so much intercession capability at hand (LaTeX code being able to redefine itself at will), a time inevitably comes when the compilation of a document fails due to a class/style conflict. In an ideal world, class/style conflicts should only be a concern for package maintainers, not end-users of LaTeX. Unfortunately, the world is real, not ideal, and end-user document compilation does break.

As both a class/style maintainer and a document author, I tried several times to come up with some general principles or a systematic approach to handling class/style cross-compatibility in a smooth and gentle manner, but I ultimately failed. Instead, one Monday morning, I woke up with this vision of the LaTeX biotope, an emergent phenomenon whose global behavior cannot be comprehended, because it is in fact the result of a myriad of "macro"-interactions between small entities, themselves in perpetual evolution.

In this presentation, I would like to draw bridges between LaTeX and biology, by viewing documents, classes and styles as living beings constantly mutating their geneTeX code in order to survive \renewcommand attacks...

Monday, September 21 2009

## FiXme 4.0 is out!

I'm happy to announce FiXme version 4.0

#### WARNING: this is a major release containing many new features and heavy
#### internals refactoring. FiXme 4.0 comes with unprecedented flexibility,
#### unrivalled extensibility and unchallenged backward-INcompatibility.

What's new in version 4.0
=========================
** Support for collaborative annotations, suggested by Michael Kubovy
** Support for "targeted" notes and environments (highlighting a portion of text), suggested by Mark Edgington
** Support for "floating" notes (not specific to any portion of text), suggested by Rasmus Villemoes
** Support for alternate layout autoswitch in TeX's inner mode, suggested by Will Robertson
** Support for automatic language tracking in multilingual documents
** Support for themes
** Extended support for user-provided layouts
** Support for key=value argument syntax in the whole user interface
** New command \fxsetup
** Homogenize log and console messages
** Heavy internals refactoring

Description
===========
FiXme is a collaborative annotation tool for LaTeX documents. Annotating a
document refers here to inserting meta-notes, that is, notes that do not
belong to the document itself, but rather to its development or reviewing
process. Such notes may involve things of different importance levels, ranging
from simple "fix the spelling" flags to critical "this paragraph is a lie"
mentions. Annotations like this should be visible during the development or
reviewing phase, but should normally disappear in the final version of the
document.

FiXme is designed to ease and automate the process of managing collaborative
annotations, by offering a set of predefined note levels and layouts, the
possibility to register multiple note authors, to reference annotations by
listing and indexing them, etc. FiXme is extensible, giving you the possibility
to create new layouts or even complete "themes", and it also comes with support
for AUC-TeX.
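
As a teaser, the new key=value interface can be driven globally through the \fxsetup command listed above. Here is a minimal sketch; the option names (status, author, theme) are assumptions on my part, so check the manual for the exact set:

```latex
\usepackage{fixme}
%% Global key=value configuration via the new \fxsetup command.
%% The option names below are assumptions; see the FiXme manual.
\fxsetup{
  status=draft, % keep the notes visible
  author=DV,    % default note author
  theme=color}  % one of the predefined themes
```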

FiXme homepage: http://www.lrde.epita.fr/~didier/softwa ... .php#fixme

## DoX v2.0 (2009/09/21) is out

I'm happy to announce the release of DoX v2.0 (2009/09/21).

New in this version:
* Optional argument to \doxitem idxtype option to change the item's index type

* Optional argument to \Describe<Item> and the <Item> environment
noprint option to avoid marginal printing
noindex option to avoid item indexing

* Extend \DescribeMacro, \DescribeEnv and their corresponding environments with the same features

The doc package provides LaTeX developers with means to describe the usage and the definition of new commands and environments. However, there is no simple way to extend this functionality to other items (options or counters for instance). DoX is designed to circumvent this limitation, and provides some improvements over the existing functionality as well.
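
For illustration, declaring a new documentation item for LaTeX lengths could look like the following sketch. The exact \doxitem argument list shown here is an assumption; refer to the DoX documentation for the real syntax:

```latex
%% Hypothetical sketch: declare a "Length" item type, indexed
%% under its own index type via the idxtype option. The argument
%% list of \doxitem is an assumption.
\doxitem[idxtype=length]{Length}{lengths}
%% This would provide, e.g., \DescribeLength{\parindent}
%% and a corresponding Length environment.
```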

Monday, September 14 2009

## DoX version 1.0 (2009/09/11) is now available

I'm happy to announce the first public version of the DoX package for LaTeX2e.

The doc package provides LaTeX developers with means to describe the usage and the definition of new macros and environments. However, there is no simple way to extend this functionality to other items (options or counters for instance). The dox package is designed to circumvent this limitation.

Wednesday, July 22 2009

## FiXme 3.4 is out

I'm happy to announce the next edition of FiXme: version 3.4

New in this release:
** \fixme, \fxerror, \fxwarning and \fxnote are now robust
** Fix incompatibility with KOMA-Script classes when the lox file is nonexistent

FiXme provides you with a way of inserting fixme notes in documents. Such notes can appear in the margin of the document, as index entries, in the log file and as warnings on stdout. It is also possible to summarize them in a list, and in the index. When you switch from draft to final mode, any remaining fixme note will be logged, but removed from the document's body. Additionally, critical notes will abort compilation with an informative message. FiXme also comes with support for AUC-TeX.
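
In practice, the note commands listed above are used like this (a minimal sketch; the draft/final switch comes from the standard class options):

```latex
\documentclass[draft]{article} % switch to [final] to strip the notes
\usepackage{fixme}
\begin{document}
This paragraph is fine.\fxnote{Maybe add an example here?}
This one is not.\fxwarning{Check these numbers before submission.}
\end{document}
```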

Wednesday, June 4 2008

## Beamer blocks and the Listings package

For many of my lectures, I use the Listings package for typesetting code excerpts, and include them in Beamer blocks. Providing nice shortcuts for doing that is not trivial if you want to preserve control over Listings options, and add a new one for the block's title. Here is a way to nicely wrap a call to \lstinputlisting inside a Beamer block.

First, let's use the xkeyval package to create a "title" option:
\define@cmdkey[dvl]{lst}[@dvl@lst@]{title}{}

Next, a low-level listing input command. This macro takes 4 arguments: an overlay specification, a title for the block, a list of options passed to Listings, and a file name for input:
%% \dvlinputlisting{overlay}{title}{lstoption=,...}{file}
\newcommand\dvlinputlisting[4]{%
  \begin{block}#1{#2}
    %% #### WARNING: I need this hack because keyval-style options
    %% mess up the parsing.
    \expandafter\lstinputlisting\expandafter[#3]{#4}
  \end{block}}

And now, you can define all sorts of specialized versions for different languages. For example, here is one for Common Lisp code. The block title is "Lisp" by default, and a "lisp" extension is automatically added to the file name:
%% Language-specific shortcuts:
%% The title option is used for the beamer block's title.
%% All other options are passed to listings.
%% \XXXinputlisting<overlay>[title=,lstoption=,...]{file}
\newcommand<>\clinputlisting[2][]{%
  \def\@dvl@lst@title{Lisp}%
  \setkeys*[dvl]{lst}{#1}%
  \edef\@dvl@lst@options{language=lisp,\XKV@rm}%
  \dvlinputlisting{#3}{\@dvl@lst@title}{\@dvl@lst@options}{#2.lisp}}

Which you could call like this:
\clinputlisting<2->[title={Example 1}, gobble=2]{ex1}

As you can see, "title" is an option for the Beamer block, and all the others are dispatched to Listings. Cool.

Now, things get more complicated when you want nice shortcuts for inline environments, because nesting Beamer blocks with listings doesn't work. Fortunately, I figured out a trick based on the Verbatim package to simulate that. The idea is to store the contents of the listing environment in a temporary file, and use \lstinputlisting as before to include it. Clever, right?
:-)
Here is a generic environment for doing that. In the opening, we read the environment's contents and store it in the file \jobname.dvl. In the ending, we call our previous macro \dvlinputlisting on that file (actually, on a dynamically created argument list called \@dvl@args):
\usepackage{verbatim}
\newwrite\lstvrb@out
\def\@dvllisting{%
  \begingroup
  \@bsphack
  \immediate\openout\lstvrb@out\jobname.dvl
  \let\do\@makeother\dospecials\catcode`\^^M\active
  \def\verbatim@processline{%
    \immediate\write\lstvrb@out{\the\verbatim@line}}%
  \verbatim@start}
\def\@enddvllisting{%
  \immediate\closeout\lstvrb@out
  \@esphack
  \endgroup
  \expandafter\dvlinputlisting\@dvl@args}

And now, we can define all sorts of specialized versions for every language we're interested in. Again, here is one for Common Lisp.
\newenvironment<>{cllisting}[1][]{%
  \def\@dvl@lst@title{Lisp}%
  \setkeys*[dvl]{lst}{#1}%
  \edef\@dvl@lst@options{language=lisp,\XKV@rm}%
  \xdef\@dvl@args{{#2}{\@dvl@lst@title}{\@dvl@lst@options}{\jobname.dvl}}%
  \@dvllisting}{%
  \@enddvllisting}

Which you can use like this:
\begin{cllisting}<2->[title={Example 1},gobble=2]
  (defun foo (x) (* 2 x))
\end{cllisting}

Don't forget that frames containing code excerpts like this are fragile!
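
Concretely, such frames must be declared with Beamer's fragile option; for instance, reusing the \clinputlisting shortcut defined above:

```latex
%% Frames containing verbatim material (listings included)
%% must be marked fragile:
\begin{frame}[fragile]{A Lisp example}
  \clinputlisting<2->[title={Example 1},gobble=2]{ex1}
\end{frame}
```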

Wednesday, February 27 2008

## FiNK 2.1.1 is released

I'm happy to announce the release of FiNK 2.1.1. This is a bugfix/documentation only release.

FiNK is a LaTeX2e package that keeps track of the files included (\input or \include) in your documents.

What's new in this version:
** Fix trailing whitespace in \fink@restore

Monday, February 25 2008

## CurVe 1.15 is out

I'm happy to announce the next edition of CurVe, a LaTeX2e class for writing curricula vitae.

What's new in this version:
** Support for itemize environments, suggested by Mirko Hessel-von Molo.
** Added some documentation about vertical spacing problems in |bbl| files, suggested by Seweryn Habdank-Wojewódzki.

Wednesday, November 28 2007

## FiXme version 3.3 is out

I'm happy to announce the next edition of FiXme: version 3.3

New in this release:
* Document incompatibility between marginal layout and the ACM SIG classes
* Honor twoside option in marginal layout
* Support KOMA-Script classes version 2006/07/30 v2.95b
* Documentation improvements
* Fix incompatibility with AMS-Art
* Fix bug in \fixme@footnotetrue

FiXme provides you with a way of inserting fixme notes in documents. Such notes can appear in the margin of the document, as index entries, in the log file and as warnings on stdout. It is also possible to summarize them in a list, and in the index. When you switch from draft to final mode, any remaining fixme note will be logged, but removed from the document's body. Additionally, critical notes will abort compilation with an informative message. FiXme also comes with support for AUC-TeX.

Tuesday, November 27 2007

## CurVe 1.14 is released

I'm happy to announce the next edition of CurVe: version 1.14.

CurVe is a Curriculum Vitae class for LaTeX2e. This version adds support for Polish, and an option to reverse-count bibliographic entries.

Enjoy!

Wednesday, November 14 2007

## FiNK 2.1 is released

I'm happy to announce the next edition of FiNK, the LaTeX2e File Name Keeper, version 2.1.

This package looks over your shoulder and keeps track of files \input'ed
(the LaTeX way) or \include'ed in your document. You then have access to
the name of the file currently being processed through several macros.
FiNK also comes with support for AUC-TeX.

This version fixes a bug preventing proper expansion in math mode.

Sunday, January 22 2006

## Generating PostScript and PDF from TeX

Some time ago, I was thinking about the generation of PostScript and/or PDF from TeX documents (I will use TeX and LaTeX interchangeably). Knowing that several options are available, I was wondering which solution people preferred. This question triggered a thread on comp.text.tex from which I relate some interesting excerpts here. In order to clarify the debate, I have tweaked or modified several of the quotations. This is a personal manipulation which does not involve the original authors. For that reason, I don't associate them directly with the text below. Warning: the first-person comments below are not all mine!

One last note: some arguments about the quality of the available visualization tools appeared in the thread. I have excluded them from the debate, since the central question was the quality of the rendering, not the ergonomics of the tools that handle it.

Participants (besides myself): LEE Sau Dan, George N. White III, David Kastrup, Mike Oliver, H.S. (??). Thanks to them for their comments.

## Options

### Direct approach:

TeX -> (tex)    -> DVI -> (dvips) -> PostScript
TeX -> (pdftex) -> PDF

### Indirect approaches:

TeX -> (tex) -> DVI -> (dvips) -> PostScript -> (ps2pdf) -> PDF
or:
TeX -> (pdftex) -> PDF -> (pdf2ps) -> PostScript

And note that it is also possible to generate PDF from the DVI file...

## Direct or indirect approach ?

pdftex does not necessarily generate the same layout as tex. pdftex allows more flexibility in adjusting the character spacing, etc, and hence may break lines differently than Knuth's tex. It doesn't occur that often, though.

pdftex can produce visually more even margins (by allowing some glyphs to protrude), which in turn allows you to use slightly narrower gutters in multi-column layouts. Not only does this save trees, it also gives effectively longer lines and so reduces the number of bad breaks, rivers, etc. This is especially helpful if you are trying to use a CM-based font in a layout originally intended for Times-Roman.

One has to remember that if you want to use the direct approach, you won't be able to use target-specific additions in your source file, or will need different versions of parts of it (perhaps in conditionals) according to the target language. For instance, it is impossible to use pstricks with pdftex because pstricks is PostScript-specific (but see pdftricks...).
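
A common way to maintain such conditionals in a single source is the ifpdf package, which detects whether the compiler is producing PDF directly:

```latex
\usepackage{ifpdf}
\ifpdf
  %% pdftex in PDF mode: PDF-specific additions
  \usepackage{hyperref}
\else
  %% classical tex -> dvips route: PostScript-specific additions
  \usepackage{pstricks}
\fi
```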

If your required packages vary according to the target language (e.g. you want hyperref for PDF output, but not for PostScript), you will most certainly have problems compiling your document in a single directory tree. That's because the aux files will vary according to your target language. So either you make clean before changing your target, or you compile (outside of the source tree) in different subtrees. This can be somewhat cumbersome, although a simple use of Makefiles and of the TEXINPUTS environment variable makes this process quite easy.

The only real disadvantage that remains is that you have to compile your document entirely twice (once for each target language), so it takes more time than with one of the indirect approaches.

So bear in mind that a direct approach might give slightly different documents.

## Cons of the PDF to PS conversion

PostScript Level 3 supports PDF with minimal translation. Older printers with Level 1 interpreters often choke on PS files created from PDF, and there are sometimes problems with Level 2 printers. In some circles PDF has a bad reputation based on bugs in early software and problems rendering PDF using old rasterizers. When a PDF file is translated to PS, the driver generally just loads PS code to define the PDF primitives. With current rasterizers this PS code is fairly simple, but with older rasterizers the code is considerably more complex and almost sure to give problems under stress.

The following arguments come from people programming PostScript directly, which is not supported with pdftex:
EPS -> PDF conversion means a loss of the PostScript elegance. Compact, repetitive code gets expanded, and hence file size gets inflated.

This is about using PostScript source translated to PDF, and then converting the PDF document back to PostScript.
Compact PostScript code (such as fractals) will be expanded in this final PostScript file, thanks to PDF's Turing-incompleteness. This means an inflated final file size.

But if you need PDF output, you have to live with its Turing-incompleteness anyway, right?

However, some people note that:
The lack of support of literal Postscript code and EPS figures (yes, I know epstopdf) is irritating. I'm switching most of my drawings, etc. to METAPOST for its elegance, and it's good news to learn that pdftex can include METAPOST figures directly (as long as I don't insert literal Postscript with the 'special' command in METAPOST).

## Pro TeX -> DVI -> PS / PDF

EPS or EEPIC are not supported by pdftex. METAPOST is supported though.

However, unless you have a tightly controlled source of EPS figures, the conversion from EPS to PDF is a tricky step, and can require tweaks (and even bug fixes to the conversion tool) to deal with the idiosyncrasies in individual files. This is much easier to get right and to debug if you convert each EPS to PDF separately than if you have problems with a document-level conversion.

So this might eventually turn into an argument in favor of the direct approach.

## Pro TeX -> PDF -> PS (throwing DVI away)

But note that you can produce DVI with pdftex: use the command \pdfoutput=0

TeX has information that gets discarded in the DVI file but which can be used by pdftex. Information available to TeX macros can be put into \specials for dvips, but pdftex can also get information from TeX's internals.

Some people object:
You still haven't specified which particular \specials are causing problems. I have been using the hyperref package for some time. With this package, I can insert document info such as author, title, etc. (displayed in Acrobat Reader when you pop up the Document Info window, Ctrl-D in some versions). The dvips driver of hyperref inserts appropriate pdfmark operators so that ps2pdf can generate this info in the final PDF file. When you use pdftex instead (thus the pdftex driver of hyperref), the macros are defined in such a way that the same info is written to the output PDF file directly. In either case, the document info is there in the final PDF. The same is true for hyperlinks, cross-reference links, PDF form entry fields, etc., as well as thumbnails and bookmarks.
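
For the record, this document info is typically set in a driver-independent way through \hypersetup (a minimal sketch with placeholder values):

```latex
\usepackage{hyperref} % driver (dvips or pdftex) is normally autodetected
\hypersetup{
  pdfauthor={An Author},
  pdftitle={A Title},
  pdfsubject={TeX, PostScript, PDF}}
```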

PDF -> PS conversions are needed by many more people than those who use TeX, while conversions involving DVI files are only useful to a limited audience. There are more and better tools for PDF -> PS than for DVI -> anything. As a case in point, the most common tool for DVI -> PS is dvips, which is based on a raster graphics model and so can have problems (even when using scalable outline fonts) if the PS file is scaled.

dvips lays out the page using a raster grid determined by the resolution you specify. Sure, -Ppdf sets a high resolution, but if you need to scale a PS file created with dvips, this causes problems. Y&Y's (commercial) dvips, on the other hand, does produce scalable PS.

About the quality of the tools, some people object:
Tools for DVI -> PS conversion are very good, stable and versatile. (e.g. the embedded T1 fonts contain only the glyphs actually used in the document.)

This is an emotional and ironic argument that might be considered as not so relevant:
If all the programs with 'dvi' in their names stopped working, a few mathematicians would be annoyed but would soon learn to use PDF. If all the programs that work with 'pdf' files stopped working, CNN would cover the disaster 24/7. If we all stop using dvi files, a big whack of TeX code can be discarded and the people who have been maintaining programs with 'dvi' in their names can get back to solving more important problems.

## Pro direct PostScript

I believe there are more tools that rely on Postscript technology than PDF. pstricks, EPS diagrams, etc. come to mind. (Yes, epstopdf is helpful. But how about pstricks? I sometimes do \special{"{some Postscript code}"} for some special effects that wouldn't be achieved easily otherwise.) Until pdftex can support Postscript specials, many users would stay with DVI+EPS. But that would be a big project.

## Unclassified

PDF files tend to have more predictable rendering times than PS files, so typesetter operators avoid PS files that aren't created by well-known applications (Photoshop, Illustrator) which produce flat PS code similar to PDF.

## Conclusion

My personal conclusion (everyone can draw their own): PDF is bound to be used on a wider scale than PostScript. Direct PDF rendering seems to be of better quality than the PostScript equivalent. Given its features, PDF is more comfortable to use on-line.

The main argument against pdftex is the impossibility of using PostScript code (and the like) in the source (however, METAPOST might be a good alternative for figures). As soon as one is not limited by these constraints, and a fortiori if the use of PostScript is limited to printing, the TeX -> PDF -> PS solution seems to be a good choice.
Copyright (C) 2008 -- 2013 Didier Verna didier@lrde.epita.fr