Didier Verna's scientific blog: Lisp, Emacs, LaTeX and random stuff.

Wednesday, November 28 2007

FiXme version 3.3 is out

I'm happy to announce the next edition of FiXme: version 3.3.

New in this release:
* Document incompatibility between marginal layout and the ACM SIG classes
* Honor twoside option in marginal layout
* Support KOMA-Script classes version 2006/07/30 v2.95b
* Documentation improvements
* Fix incompatibility with the amsart class
* Fix bug in \fixme@footnotetrue

FiXme provides a way to insert "fixme" notes in documents. Such notes can appear in the margin of the document, as index entries, in the log file, and as warnings on stdout. It is also possible to summarize them in a list and in the index. When you switch from draft to final mode, any remaining fixme note is logged but removed from the document's body. Additionally, critical notes abort compilation with an informative message. FiXme also comes with AUC-TeX support.
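A minimal usage sketch (the \fixme command and \listoffixmes list are the interface as I recall it from this version; check the package documentation for the exact option and command names):

```latex
\documentclass{article}
% 'draft' makes the notes visible; switching to 'final' logs
% remaining notes and removes them from the document's body.
\usepackage[draft]{fixme}

\begin{document}
\listoffixmes % summary list of all fixme notes

Some paragraph that still needs work.%
\fixme{Check this figure's reference}
\end{document}
```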

Tuesday, November 27 2007

CurVe 1.14 is released

I'm happy to announce the next edition of CurVe: version 1.14.

CurVe is a Curriculum Vitae class for LaTeX2e. This version adds support for Polish, and an option to reverse-count bibliographic entries.

Enjoy!

Wednesday, November 14 2007

FiNK 2.1 is released

I'm happy to announce the next edition of FiNK, the LaTeX2e File Name Keeper, version 2.1.

This package looks over your shoulder and keeps track of the files \input'ed (the LaTeX way) or \include'ed in your document. You then have permanent access to the directory, name, and extension of the file currently being processed, through several macros. FiNK also comes with AUC-TeX support.

This version fixes a bug preventing proper expansion in math mode.
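A quick sketch of what that access looks like (the accessor macro names below are assumed from the package's documented interface; verify them against the FiNK manual):

```latex
\documentclass{article}
\usepackage{fink}

\begin{document}
% Somewhere inside chapter.tex, pulled in with \input{chapter}:
Currently processing \texttt{\finkfile}
(directory: \texttt{\finkdir},
base name: \texttt{\finkbase},
extension: \texttt{\finkext}).
\end{document}
```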

Sunday, January 22 2006

Generating PostScript and PDF from TeX

Some time ago, I was thinking about the generation of PostScript and/or PDF from TeX documents (I will use TeX and LaTeX interchangeably here). Knowing that several options are available, I wondered which solution people preferred. This question triggered a thread on comp.text.tex, from which I relate some interesting excerpts here. In order to clarify the debate, I have tweaked or modified several of the quotations. These edits are mine alone and do not implicate the original authors; for that reason, I don't attribute the text below to them directly. Warning: the first-person comments below are not all mine!

One last note: some arguments about the quality of the available visualization tools appeared in the thread. I have excluded them from the debate, since the central question was the quality of the rendering, not the ergonomics of the tools that display it.

Participants (besides myself): LEE Sau Dan, George N. White III, David Kastrup, Mike Oliver, H.S. (??). Thanks to them for their comments.


Direct approach:

TeX -> (tex)    -> DVI -> (dvips) -> PostScript
TeX -> (pdftex) -> PDF

Indirect approaches:

TeX -> (tex) -> DVI -> (dvips) -> PostScript -> (ps2pdf) -> PDF
TeX -> (pdftex) -> PDF -> (pdf2ps) -> PostScript

Note that it is also possible to generate PDF directly from the DVI file (with dvipdfm, for instance)...

Direct or indirect approach?

pdftex does not necessarily generate the same layout as tex. pdftex allows more flexibility in adjusting the character spacing, etc, and hence may break lines differently than Knuth's tex. It doesn't occur that often, though.

pdftex can produce visually more even margins (by allowing some glyphs to protrude), which in turn allows you to use slightly narrower gutters in multi-column layouts. Not only does this save trees, it also gives effectively longer lines and so reduces the number of bad breaks, rivers, etc. This is especially helpful if you are trying to use a CM-based font in a layout originally intended for Times-Roman.
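Under pdftex, glyph protrusion (and font expansion) is most conveniently switched on through the microtype package; a minimal sketch:

```latex
\documentclass{article}
% protrusion=true gives the visually even margins discussed above;
% expansion=true additionally allows slight glyph stretching, which
% is one reason pdftex may break lines differently than Knuth's tex.
\usepackage[protrusion=true,expansion=true]{microtype}

\begin{document}
Body text benefits from both features automatically.
\end{document}
```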

One has to remember that if you want to use the direct approach, you won't be able to use target-specific additions in your source file, or will need different versions of parts of it (perhaps in conditionals) according to the target language. For instance, it is impossible to use pstricks with pdftex because pstricks is PostScript-specific (but see pdftricks...).

If your required packages vary according to the target language (e.g. you want hyperref for PDF output, but not for PostScript), you will most certainly have problems compiling your document in a single directory tree. That's because the aux files will vary according to your target language. So either you make clean before changing your target, or you compile (outside of the source tree) in different subtrees. This can be somewhat cumbersome, although a simple use of Makefiles and of the TEXINPUTS environment variable makes this process quite easy.
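One common way to keep a single source with target-specific parts is a conditional on the engine, e.g. with the ifpdf package (a sketch; adapt the branches to your own package list):

```latex
\documentclass{article}
\usepackage{ifpdf}

\ifpdf
  % pdftex route: PDF-specific packages
  \usepackage[pdftex]{graphicx}
  \usepackage[pdftex]{hyperref}
\else
  % tex -> dvips route: PostScript-specific packages
  \usepackage[dvips]{graphicx}
  \usepackage{pstricks}
\fi

\begin{document}
The preamble above selects the right drivers for each target.
\end{document}
```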

The only real disadvantage that remains is that you have to compile your document entirely twice (once for each target language), so it takes more time than with one of the indirect approaches.

So bear in mind that a direct approach might give slightly different documents.

Con PDF -> PS conversion

PostScript Level 3 supports PDF with minimal translation. Older printers with Level 1 interpreters often choke on PS files created from PDF, and there are sometimes problems with Level 2 printers. In some circles PDF has a bad reputation based on bugs in early software and problems rendering PDF using old rasterizers. When a PDF file is translated to PS, the driver generally just loads PS code to define the PDF primitives. With current rasterizers this PS code is fairly simple, but with older rasterizers the code is considerably more complex and almost sure to give problems under stress.

The following arguments come from people programming PostScript directly, which pdftex does not support:
EPS -> PDF conversion means losing PostScript's elegance: compact, repetitive code gets expanded, and hence the file size gets inflated.

This is about using PostScript source translated to PDF, and then converting the PDF document back to PostScript. Compact PostScript code (such as fractals) will be expanded in this final PostScript file, because of PDF's Turing-incompleteness. This means an inflated final file size.

But if you need PDF output, you have to live with its Turing-incompleteness anyway, right?

However, some people note that:
The lack of support for literal PostScript code and EPS figures (yes, I know about epstopdf) is irritating. I'm switching most of my drawings to MetaPost for its elegance, and it's good news to learn that pdftex can include MetaPost figures directly (as long as I don't insert literal PostScript with the 'special' command in MetaPost).
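Including a MetaPost figure under pdftex boils down to giving the output file a .mps extension and using graphicx (a sketch):

```latex
\documentclass{article}
\usepackage{graphicx}

\begin{document}
% MetaPost output 'figure.1' renamed to 'figure-1.mps':
% pdftex converts such (PostScript-subset) files on the fly,
% as long as they contain no literal PostScript specials.
\includegraphics{figure-1.mps}
\end{document}
```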

Pro TeX -> DVI -> PS / PDF

EPS and EEPIC are not supported by pdftex. MetaPost is supported, though.

However, unless you have a tightly controlled source of EPS figures, the conversion from EPS to PDF is a tricky step, and can require tweaks (and even bug fixes to the conversion tool) to deal with the idiosyncrasies of individual files. This is much easier to get right and to debug if you convert each EPS file to PDF separately than if you run into problems with a document-level conversion.

So this might eventually turn into an argument in favor of the direct approach.

Pro TeX -> PDF -> PS (throwing DVI away)

But note that you can still produce DVI with pdftex: use the command \pdfoutput=0.

TeX has information that gets discarded in the DVI file but which can be used by pdftex. Information available to TeX macros can be put into \specials for dvips, but pdftex can also get information from TeX's internals.

Some people object:
You still haven't specified which particular \specials are causing problems. I have been using the hyperref package for some time. With this package, I can insert document info such as author, title, etc., displayed in Acrobat Reader when you pop up the Document Info window (Ctrl-D in some versions). The dvips driver of hyperref inserts appropriate pdfmark operators so that ps2pdf can generate this info in the final PDF file. When you use pdftex instead (thus using the pdftex driver of hyperref), the macros are defined in such a way that the same info is written to the output PDF file directly. In either case, the document info ends up in the final PDF. The same is true for hyperlinks, cross-reference links, PDF form entry fields, thumbnails, and bookmarks.
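For reference, the driver-independent way of setting that document info with hyperref is \hypersetup (a sketch; hyperref picks the dvips or pdftex driver automatically in most setups):

```latex
\documentclass{article}
\usepackage{hyperref}
% The same keys work for both the dvips and pdftex routes:
\hypersetup{
  pdfauthor={Didier Verna},
  pdftitle={Generating PostScript and PDF from TeX},
  pdfsubject={TeX workflows}
}

\begin{document}
The info above shows up in the Document Info window.
\end{document}
```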

PDF -> PS conversions are needed by many more people than those who use TeX, while conversions involving DVI files are useful only to a limited audience. There are more and better tools for PDF -> PS than for DVI -> anything. As a case in point, the most common tool for DVI -> PS is dvips, which is based on a raster graphics model and so can have problems (even when using scalable outline fonts) if the PS file is scaled.

dvips lays out the page using a raster grid determined by the resolution you specify. Sure, -Ppdf sets a high resolution, but if you need to scale a PS file created with dvips, this causes problems. Y&Y's (commercial) dvipsone does produce scalable PS.

About the quality of the tools, some people object:
Tools for DVI -> PS conversion are very good, stable and versatile (e.g. the embedded Type 1 fonts contain only the glyphs actually used in the document).

This is an emotional and ironic argument that might be considered not so relevant:
If all the programs with 'dvi' in their names stopped working, a few mathematicians would be annoyed but would soon learn to use PDF. If all the programs that work with 'pdf' files stopped working, CNN would cover the disaster 24/7. If we all stopped using DVI files, a big whack of TeX code could be discarded, and the people who have been maintaining programs with 'dvi' in their names could get back to solving more important problems.

Pro direct PostScript

I believe there are more tools that rely on PostScript technology than on PDF: pstricks, EPS diagrams, etc. come to mind. (Yes, epstopdf is helpful. But what about pstricks? I sometimes use \special{"{some Postscript code}"} for special effects that wouldn't be achieved easily otherwise.) Until pdftex supports PostScript specials, many users will stay with DVI+EPS. But that would be a big project.


PDF files tend to have more predictable rendering times than PS files, so typesetter operators avoid PS files that aren't created by well-known applications (Photoshop, Illustrator) which produce flat PS code similar to PDF.


My personal conclusion (everybody can draw their own): PDF is bound to be used on a wider scale than PostScript. Direct PDF rendering seems to be of better quality than the PostScript equivalent. Given its features, PDF is also more comfortable to use on-line.

The main argument against pdftex is the impossibility of using PostScript code (and the like) in the source (however, MetaPost might be a good alternative for figures). As soon as one is not limited by these constraints, and a fortiori if the use of PostScript is limited to printing, the TeX -> PDF -> PS solution seems to be a good choice.


Copyright (C) 2008 -- 2018 Didier Verna didier@lrde.epita.fr