Generating PostScript and PDF from TeX
By Didier Verna on Sunday, January 22 2006, 12:45 - LaTeX - Permalink
Some time ago, I was thinking about the generation of PostScript and/or PDF from TeX documents (I will speak indifferently of TeX and LaTeX). Knowing that several options are available, I was wondering which solution people preferred. This question triggered a thread on
comp.text.tex, from which I relate some interesting excerpts here. In order to clarify the debate, I have tweaked or modified several of the quotations. This is a personal edit which does not involve the original authors; for that reason, I don't attribute the quotations directly. Warning: not all of the first-person comments below are mine!

A last note: some arguments about the quality of the available visualization tools appeared in the thread. I have excluded them from the debate, since the central question was the quality of the rendering, not the ergonomics of the tools that handle them.
Participants (besides myself): LEE Sau Dan, George N. White III, David Kastrup, Mike Oliver, H.S. (??). Thanks to them for their comments.
Options
Direct approaches:
TeX -> (tex) -> DVI -> (dvips) -> PostScript
TeX -> (pdftex) -> PDF
Indirect approaches:
TeX -> (tex) -> DVI -> (dvips) -> PostScript -> (ps2pdf) -> PDF
or:
TeX -> (pdftex) -> PDF -> (pdf2ps) -> PostScript
And note that it is also possible to generate PDF from the DVI file...
Direct or indirect approach?
pdftex does not necessarily generate the same layout as tex. pdftex allows more flexibility in adjusting the character spacing, etc., and hence may break lines differently than Knuth's tex. It doesn't occur that often, though.

pdftex can produce visually more even margins (by allowing some glyphs to protrude), which in turn allows you to use slightly narrower gutters in multi-column layouts. Not only does this save trees, it also gives effectively longer lines and so reduces the number of bad breaks, rivers, etc. This is especially helpful if you are trying to use a CM-based font in a layout originally intended for Times-Roman.

One has to remember that if you want to use the direct approach, you won't be able to use target-specific additions in your source file, or will need different versions of parts of it (perhaps in conditionals) according to the target language. For instance, it is impossible to use pstricks with pdftex, because pstricks is PostScript-specific (but see pdftricks...).
If your required packages vary according to the target language (e.g. you want hyperref for PDF output, but not for PostScript), you will most certainly have problems compiling your document in a single directory tree. That's because the aux files will vary according to your target language. So either you make clean before changing your target, or you compile (outside of the source tree) in different subtrees. This can be somewhat cumbersome, although a simple use of Makefiles and of the TEXINPUTS environment variable makes this process quite easy.

The only real disadvantage that remains is that you have to compile your document entirely twice (once for each target language), so it takes more time than with one of the indirect approaches.
So bear in mind that a direct approach might give slightly different documents.
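The target-specific conditionals mentioned above are usually written as a test on \pdfoutput; the ifpdf package wraps that test. Here is a minimal sketch of a dual-target preamble (assuming ifpdf is installed; the package names are just the examples from this discussion):

```latex
% Sketch of a dual-target preamble. Assumes the ifpdf package;
% a bare \ifx\pdfoutput\undefined test would also work.
\documentclass{article}
\usepackage{ifpdf}
\ifpdf
  \usepackage[pdftex]{hyperref}% PDF target: hyperlinks wanted
\else
  \usepackage{pstricks}% DVI/PostScript target: literal PS is fine
\fi
\begin{document}
Body common to both targets.
\end{document}
```

With such a conditional, the same source compiles under both tex and pdftex, but the aux-file caveat above still applies: the two targets produce different auxiliary files.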
Con PDF -> PS conversion
PostScript Level 3 supports PDF with minimal translation. Older printers with Level 1 interpreters often choke on PS files created from PDF, and there are sometimes problems with Level 2 printers. In some circles PDF has a bad reputation based on bugs in early software and problems rendering PDF using old rasterizers. When a PDF file is translated to PS, the driver generally just loads PS code to define the PDF primitives. With current rasterizers this PS code is fairly simple, but with older rasterizers the code is considerably more complex and almost sure to give problems under stress.
The following arguments come from people programming PostScript directly, which is not supported with pdftex:

EPS -> PDF conversion means a loss of PostScript's elegance. Compact, repetitive code gets expanded, and hence file size gets inflated.
This is about using PostScript source translated to PDF, and then converting the PDF document back to PostScript. Compact PostScript code (such as fractals) will be expanded in this final PostScript file, thanks to PDF's Turing-incompleteness. This means an inflated final file size.
But if you need PDF output, you have to live with its Turing-incompleteness anyway, right?
However, some people note that:
The lack of support for literal PostScript code and EPS figures (yes, I know epstopdf) is irritating. I'm switching most of my drawings, etc. to METAPOST for its elegance, and it's good news to learn that pdftex can include METAPOST figures directly (as long as I don't insert literal PostScript with the 'special' command in METAPOST).

Pro TeX -> DVI -> PS / PDF
EPS or EEPIC are not supported by pdftex. METAPOST is supported, though.

However, unless you have a tightly controlled source of EPS figures, the conversion from EPS to PDF is a tricky step, and can require tweaks (and even bug fixes to the conversion tool) to deal with the idiosyncrasies of individual files. This is much easier to get right and to debug if you convert each EPS to PDF separately than if you run into problems with a document-level conversion.
So this might eventually turn into an argument in favor of the direct approach.
Pro TeX -> PDF -> PS (throwing DVI away)
But note that you can produce DVI with pdftex: use the command \pdfoutput=0.
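That switch is usually guarded, so the same source also compiles under Knuth's tex, which has no \pdfoutput. A small sketch (the ifpdf package handles the corner cases more robustly):

```latex
% Guarded DVI switch: pdftex honours \pdfoutput=0, while
% Knuth's tex (which does not define \pdfoutput) skips the branch.
\ifx\pdfoutput\undefined
  % running under tex/latex: DVI is produced anyway
\else
  \pdfoutput=0 % running under pdftex/pdflatex: force DVI output
\fi
```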
TeX has information that gets discarded in the DVI file but which can be used by pdftex. Information available to TeX macros can be put into \specials for dvips, but pdftex can also get information from TeX's internals.

Some people object:
You still haven't specified which particular \specials are causing problems. I have been using the hyperref package for some time. With this package, I can insert document info such as author, title, etc. (displayed in Acrobat Reader when you pop up the Document Info window (Ctrl-D in some versions)). The dvips driver of hyperref will insert appropriate pdfmark operators so that ps2pdf can generate this info in the final PDF file. When you use pdftex instead (thus using the pdftex driver of hyperref), the macros are defined in such a way that the same info is written to the output PDF file directly. In either case, the document info is there in the final PDF. The same is true for hyperlinks, cross-reference links, PDF form entry fields, etc. Also thumbnails and bookmarks.

PDF -> PS conversions are needed by many more people than just TeX users, while conversions involving DVI files are only useful to a limited audience. There are more and better tools for PDF -> PS than for DVI -> anything. As a case in point, the most common tool for DVI -> PS is dvips, which is based on a raster graphics model and so can have problems (even when using scalable outline fonts) if the PS file is scaled. dvips lays out the page using a raster grid determined by the resolution you specify. Sure, -Ppdf sets a high resolution, but if you need to scale a PS file created with dvips, this causes problems. Y&Y's (commercial) dvipsone does produce scalable PS.

About the quality of the tools, some people object:
Tools for DVI -> PS conversion are very good, stable and versatile. (e.g. the embedded Type 1 fonts contain only the glyphs actually used in the document.)
This is an emotional and ironic argument that might be considered as not so relevant:
If all the programs with 'dvi' in their names stopped working, a few mathematicians would be annoyed but would soon learn to use PDF. If all the programs that work with 'pdf' files stopped working, CNN would cover the disaster 24/7. If we all stop using dvi files, a big whack of TeX code can be discarded, and the people who have been maintaining programs with 'dvi' in their names can get back to solving more important problems.
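To make the hyperref point from the objection above concrete: the same document info reaches the final PDF by either route, and only the driver option changes. A sketch, using hyperref's documented \hypersetup keys (the author and title values are just this article's own):

```latex
% For the dvips -> ps2pdf route, load hyperref with the dvips
% driver (it emits pdfmark operators); for the pdftex route, use
% the pdftex driver (it writes the info into the PDF directly).
\usepackage[dvips]{hyperref}% or: \usepackage[pdftex]{hyperref}
\hypersetup{
  pdfauthor={Didier Verna},
  pdftitle={Generating PostScript and PDF from TeX}
}
```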
Pro direct PostScript
I believe there are more tools that rely on PostScript technology than on PDF. pstricks, EPS diagrams, etc. come to mind. (Yes, epstopdf is helpful. But how about pstricks? I sometimes do \special{"{some PostScript code}"} for special effects that wouldn't be achieved easily otherwise.) Until pdftex can support PostScript specials, many users will stay with DVI+EPS. But that would be a big project.

Unclassified
PDF files tend to have more predictable rendering times than PS files, so typesetter operators avoid PS files that aren't created by well-known applications (Photoshop, Illustrator) which produce flat PS code similar to PDF.
Conclusion
My personal conclusion (everyone can draw their own): PDF is bound to be used on a wider scale than PostScript. Direct PDF rendering seems to be of better quality than the PostScript equivalent. Given its features, PDF is more comfortable to use on-line.
The main argument against pdftex is the impossibility of using PostScript code (and the like) in the source (however, METAPOST might be a good alternative for figures). As soon as one is not limited by these constraints, and a fortiori if the use of PostScript is limited to printing, the TeX -> PDF -> PS solution seems to be a good choice.