LaTeX Alive
A breqn for punishment

As an aside before I start work on belatedly fixing some of unicode-math's more pressing shortcomings, I've also started work tidying up the breqn package:

The breqn package was the brainchild of Michael J. Downes (of amsmath fame), and later updated by Morten Høgholm as part of his honours thesis. The package is explained in Michael’s 1997 TUGboat article ‘Breaking Equations’, and Morten’s thesis is available online if you know where to look (I don’t know how long it will be available there).

Since I have so much trouble keeping up with LaTeX work as it is, why would I take another package under my wing? In short, partly because breqn vastly alters the implementation of LaTeX mathematics, so I will eventually need to know how to hook into it to properly support unicode-math. But also partly because of its heritage; such a promising piece of work shouldn’t be neglected. I used breqn in my own PhD thesis, and while I could have done without it, once you start with breqn it is hard to go back to amsmath.

If you’re interested, I need help generating test files; once they’re sufficiently comprehensive I’ll begin the process of translating the code into LaTeX3. Hopefully I can fix a bug or two along the way.

New fontspec release

Major releases of fontspec don’t happen too often:

v2.1   2010 / 09
v2.2   2011 / 09
v2.2b  2012 / 05 ("TeX Live 2012")
v2.3   2013 / 02
v2.4   2014 / 06

The new version on its way to CTAN now is the largest in a long time; I’m happy to report that a couple of long-standing and significant bugs have been squashed. (Of course, I may well have spawned a few more…)

This release happened because of one bug in particular, involving the poor (or rather non-existent) interaction between SizeFeatures and the shape options such as ItalicFeatures. I’d known about it for years and years, gnawing away in the background, and I knew it wasn’t exactly a trivial fix; my code had simply not been written cleanly with these two interacting features in mind.

It shouldn’t be a secret that I haven’t had much time for LaTeX programming over the last few years now, and fixing up fontspec, the package that got me into this whole mess in the first place, turned out to be a good way to re-engage somewhat with my old flame.

It turns out that, after reading through the code again, summarising what was going on, and heavily refactoring, the bug I was trying to fix was fairly easy to resolve. All it needed was some time and attention. And while looking through my old code and old bugs, I managed to squeeze in a few new features at the same time.

One particular change that I’ve made that I want to discuss is the position of the optional arguments. For the last decade, fontspec has asked its users to write

  \setmainfont[
    Ligatures = TeX ,
    Extension = .otf ,
    UprightFont = *-regular ,
  ]{fontname}

to select a font. First of all, Ligatures=TeX is now the default when using either \setmainfont or \setsansfont. This is something I’ve wanted for a long time.

Secondly, it’s always seemed awkward to me to have the mandatory argument hanging on the end of \setmainfont like that—somewhat forgotten, and strangely unbalanced (the most important detail is left ‘til last).

I realised that with xparse, it was trivial to add a new interface to fontspec:

  \setmainfont{fontname}[
    Ligatures = TeX ,
    Extension = .otf ,
    UprightFont = *-regular ,
  ]

I’m worried this may cause some minor issues if people have ever written something like:

\setmainfont{fontname} [uh oh]

but until people contact me about this particular problem, I’m going to assume people haven’t been writing this—after all, this is what \newfontfamily is for. I can always revert the change if there is any pain, and for me the new interface is far, far more palatable.
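For reference, a minimal sketch of the \newfontfamily approach; the command name and font chosen here are placeholders of my own:

```latex
% \newfontfamily defines a font-switching command instead of changing
% the document's main font; the names below are placeholders.
\newfontfamily\displayfamily{TeX Gyre Adventor}

% Later, in the document body:
{\displayfamily Text set in the ad-hoc family.}
```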

I’m not going to spend any more time discussing new features; you can read about them below (transcribed from the readme). Please let me know via the Issue Tracker if you come across any problems or have further suggestions to make.

I’m aware this doesn’t resolve everything on my todo list, but I have some pressing bugs in unicode-math to attend to next.

Release notes for v2.4

  • Significant change to the user interface: instead of \setmainfont[features]{font}, you now write \setmainfont{font}[features]. Backwards compatibility is of course preserved.

    The reason for this change is to improve the visual comprehension of the font loading syntax with large numbers of font features.

  • Defaults for symbolic font families such as \rmfamily can now be specified with

        \defaultfontfeatures[\rmfamily]{<features>}

  • New PunctuationSpace=WordSpace and PunctuationSpace=TwiceWordSpace settings, intended for monospaced fonts; these force the space after a period to be exactly one or two spaces wide, respectively, which preserves character alignment across lines.

  • Using the features above, some of the default settings have changed:

    • Ligatures=TeX is enabled by default with \setmainfont and \setsansfont.
    • WordSpace={1,0,0} and PunctuationSpace=WordSpace are now enabled by default for \setmonofont to produce better monospaced results.
    • (These can be adjusted by creating your own fontspec.cfg file.)
  • SizeFeatures can now be nested inside ItalicFeatures (etc.) and behaves correctly. This fixes a very long-overdue bug!

  • New feature NFSSFamily=ABC to set the NFSS family of a font to “ABC”. Useful when other packages use the \fontfamily{ABC}\selectfont interface.

  • New feature FontFace = {series}{shape}{font} allows a font face to be loaded with a specific NFSS font series and font shape. A more verbose syntax allows arbitrary font features as well (and this also plays nicely with SizeFeatures):

          FontFace = {b}{ui}{Font = myfont-bui.otf, <features>} ,

    The code above, for example, will allow a bold upright italic font to be selected using the standard NFSS interface:

          \fontseries{b}\fontshape{ui}\selectfont

  • \defaultfontfeatures+ (note the +) can now be used to append to the default font feature set.

  • Setting the SmallCapsFont using the *-replacement notation has been improved/fixed.
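To make the list above concrete, here’s a sketch of a preamble exercising several of the new features at once; the font name ‘texgyrepagella’ and the NFSS family name ‘pgl’ are placeholders of my own, not values from the fontspec documentation:

```latex
% Sketch only: several v2.4 features together.
\usepackage{fontspec}

% Append to the default feature set rather than overwriting it:
\defaultfontfeatures+{ Numbers = OldStyle }

% New argument order: mandatory font name first, features last.
\setmainfont{texgyrepagella}[
  Extension   = .otf ,
  UprightFont = *-regular ,
  NFSSFamily  = pgl ,   % usable by other packages as \fontfamily{pgl}\selectfont
]
```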

Auto-resize images that overfill a page

LaTeX’s support for graphics through the graphicx package is reliable and works well, but occasionally I find its options somewhat limiting. One thing that often causes me to stop and immediately recompile a document I’m working on happens when an image I’ve inserted is poorly sized and takes up far more space than it should:

In these sorts of situations I’ll immediately chuck in a quick [width=\textwidth] and merrily continue writing my document (or sometimes [height=0.7\textheight] in a beamer document, where vertical space is often compromised instead).

I’ve often thought of writing some code that wraps around \includegraphics to check whether the width or height of an included graphic is too large, and shrink down the image if and only if necessary. Note that while it is possible to specify

    \setkeys{Gin}{width=\textwidth}

in the preamble, this has the side-effect of making all the graphics the width of the text — often just as disastrous when inserting smaller images. Furthermore, after setting this option globally, trying to write height=<whatever> for an individual graphic will have unexpected consequences!

Luckily for me, the TeX community is wide and wonderful, and Martin Scharrer has already implemented what I’m after (and more) in the adjustbox package. I’ll leave you to digest the rest of the manual on your own, but here’s a simple declaration in the preamble:

\adjustboxset{max size={\textwidth}{0.7\textheight}}

This enables the smart resizing that I’ve always wanted without any change to the rest of the document — large images are resized down but small images are left as they are. There are also max width and max height options in case you do not wish to set both as in the above example.
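Putting it together, here’s a sketch of the preamble plus usage; the filename is a placeholder, and I’m assuming adjustbox’s \adjincludegraphics command (its drop-in replacement for \includegraphics) picks up the defaults set this way:

```latex
\usepackage{adjustbox}
% Apply to every adjustbox-managed graphic from here on:
\adjustboxset{max size={\textwidth}{0.7\textheight}}

% In the document body ('big-diagram' is a placeholder filename):
% \adjincludegraphics{big-diagram}
```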

This may sound like a small thing, but I iterate quickly when writing lectures with beamer, where I often use images that need shrinking. Having such an unobtrusive way to avoid resizing them manually each and every time will save me perhaps not appreciably in time, but certainly in brain space. Thanks Martin!

Avoiding linebreaks before citations and things

When using a numerical bibliography style, it’s considered bad form (at least by me) to have a linebreak before the citation. You might see suggestions in LaTeX guides to write

... something profound~\cite{robertson2013}.

to avoid the citation number (‘[72]’, or whatever) ending up on its own at the beginning of a line.

This idea departs slightly from LaTeX’s philosophy of separating form and content, since you have to explicitly remember to insert the non-breaking space before each citation. I prefer to do this sort of thing automatically, so that I can write

... something profound \cite{robertson2013}.

and rest assured in the knowledge that I’ll have no lonely citations. Donald Arseneau’s cite package will do exactly that if you ask it to, although these days you’re most likely using biblatex or similar, which precludes the use of cite. The method used by cite is nice and general:

  \@tempskipa\lastskip \edef\@tempa{\the\@tempskipa}\unskip
  \ifnum\lastpenalty=\z@ \penalty\citeprepenalty \fi
  \ifx\@tempa\@zero@skip \spacefactor1001 \fi % if no space before, set flag
  \ifnum\spacefactor>\@m \ \else \hskip\@tempskipa \fi

(in inimitable Arseneau style). If you can follow this code then feel free to use it for your own documents. Were I writing a package to automatically insert nonbreaking spaces, I’d use something very similar.

But for me in my own documents, things are a little simpler. The easiest solution is

    ... something profound~\cite{robertson2013}.

which I can write without even thinking and sometimes do. To be slightly more general, it would be nice if this code didn’t add a space if one wasn’t already there to begin with.

So here’s a command \nobreakbefore that examines whether there’s any previous space, and if there is some, removes it and re-adds a nonbreaking space. It suits basically all of my needs, which is enough when I’m running against a deadline on my thesis:

      \newcommand\nobreakbefore{%
        \ifdim\lastskip > 0pt\relax
          \unskip\nobreakspace
        \fi}
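To avoid typing \nobreakbefore by hand before every citation, it could be prepended to \cite automatically; here’s a sketch using etoolbox’s \pretocmd (untested against biblatex’s own redefinitions of \cite):

```latex
% Sketch: make every \cite insert the nonbreaking space itself.
\usepackage{etoolbox}
\pretocmd{\cite}{\nobreakbefore}{}{}
```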

Being able to automate this sort of thing is one of the reasons that I like LaTeX.

Use of § in refstyle

I’m a big fan of the refstyle package. (Before I knew it existed, I started writing something similar myself. I’m glad I found refstyle before it was too late!)

The refstyle package automates the use of cross references; while vanilla LaTeX would have us write Figure~\ref{xyz}, this is written in refstyle as \figref{xyz}. This is far more flexible, and the syntax lends itself to many clever extensions, such as referring to ranges of figures with \figrangeref or to multiple individual ones using comma-separated lists. (And sections, and chapters, and equations, etc.)

From memory, The LaTeX Companion does not mention refstyle; I don’t recommend the use of prettyref or fancyref these days, as they’re both very limited in comparison. There is a rival to refstyle named cleveref which I have not used; it has been actively developed for a number of years and is worth checking out.

I’ll talk about refstyle’s syntax vs cleveref’s another day, perhaps. If we were to choose one of them to emulate for a LaTeX3 package, which would we choose? I do not know. We’re not at that stage now, so I’ll put off thinking about it.

What I do want to discuss here is the use of the section sign, §. By default, writing

see \secref{foo} for foo and we know about bar already (\secref*{bar}).

results in the output

see section §1 for foo and we know about bar already (§2).

In this case, I’m (ab)using the default appearance of the section sign to use \secref* as a ‘short reference’ that’s nice and tidy within parentheses, while preferring to spell out ‘section’ explicitly in text within a sentence.

My PhD supervisor has pointed out to me, however, that writing ‘section §1’ is like writing ‘section section 1’ — not really the thing to do. Luckily refstyle provides hooks (namely, \ifRSstar) that allow us to define a reference that defaults to ‘section 1’ but shifts to ‘§1’ when the star form is used.

In the configuration file that defines the refcmd for sections, simply write

refcmd  = {\ifRSstar\S\fi\ref{#1}},

and we’re all set.
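For completeness, the surrounding configuration might look something like the following sketch; the key values here are illustrative, not refstyle’s actual defaults:

```latex
% Illustrative \newref configuration: plain references spell out the
% word; starred references fall back to the section sign.
\newref{sec}{
  name   = {section~} ,
  Name   = {Section~} ,
  refcmd = {\ifRSstar\S\fi\ref{#1}} ,
}
```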

(At the same time, I switch all of the lowercase alternatives to uppercase so that cross-references are always ‘Section 1’ and so on — this is probably a regional preference.)

Removing subsection numbers in a ToC

I’m very behind on taking care of the Herries Press packages (PDF) which I’m maintaining. Apologies to all who have emailed with suggestions, comments, and questions, and to whom I’ve unfortunately not been able to respond.

Funny request today: how can I remove subsection numbers from the Table of Contents without removing their entries altogether? LaTeX’s tocdepth counter controls which entries will appear in the ToC, but has nothing to say about their formatting. Adjusting LaTeX’s ToC formatting yourself is of course possible, but the tocloft package makes things a bit easier.

Still, in this case it doesn’t provide an easy interface, since who in their right mind would want to number a subsection but not include that number in the ToC? (You can’t fight stylistic decisions like this sometimes—just go with the flow.) Even with tocloft, it’s still necessary to write some internal code — and caveat emptor as always in such a case because you never know when the package might change its internal commands behind your back!

In this case, the trick is to read the tocloft documentation and notice that it provides hooks for inserting material before and after the number of a section entry in the ToC. For subsections, these commands, respectively, are \cftsubsecpresnum and \cftsubsecaftersnum. E.g., write

    \renewcommand{\cftsubsecaftersnum}{!}
to have a bang after every subsection number. (Presumably, a style or spacing change would be more common here.)

Aha! The intrepid LaTeX programmer at this point might suggest something like this:

    \renewcommand{\cftsubsecpresnum}{\gobblesubsecnum}
    \def\gobblesubsecnum#1\cftsubsecaftersnum{}
That is, when the pre-hook before the number executes, read everything up to and including the after-hook, and do nothing with it. (If the definition was {#1} instead, then it would typeset as usual.)

Well, give it a try like I did, and you’ll discover it fails miserably. Back to the drawing board—time to read the package source. The issue stems from the fact that LaTeX uses one command only to typeset each line of a ToC, so this command must be completely general whether it’s a chapter or section or subsection that it’s dealing with.

Internally, the number in the ToC is typeset using something like

\@cftbsnum « the number itself » \@cftasnum

where \@cftbsnum and \@cftasnum are \let equal to their appropriate high-level definitions \cft…presnum and \cft…aftersnum as necessary for the context. Therefore, any look-ahead-and-gobble command must be defined specifically to look ahead for the internal version of the hook rather than the high-level name given in the documentation.

Et voilà:

    \makeatletter
    \renewcommand{\cftsubsecpresnum}{\gobblesubsecnum}
    \def\gobblesubsecnum#1\@cftasnum{}
    \makeatother
I still don’t think this is very useful, but it’s a nice trick.

Hooks are funny in this way. If a hook is implemented as a command that takes the material as an argument, such as

\cftsubsecsnumhook{ « the actual number here » }

it’s hard to do parsing on it that involves scanning ahead for the contents of the number. On the other hand, in this case where the hook has the form where it’s surrounded by a pre-hook and a post-hook, it’s hard to grab the whole argument and do something with it since the internal post-hook might not be very user-accessible, as in the example above.

I don’t know if there’s a way to structure hooks in any better way, however. One possibility I can think of would be to have a generic ‘end hook’ token that could always be read until, such as

\cftsubsecpresnum « the actual number here » \cftendhook \cftsubsecaftersnum

where \cftendhook was simply a no-op. There would be some subtle issues with nesting that would be somewhat painful to work around.

Maybe it would be better to have the number braced as an argument, and instead of having \cftsubsecpresnum be a no-op by default it could be an identity function, \def\cftsubsecpresnum#1{#1}:

\cftsubsecpresnum { « the actual number here » } \cftsubsecaftersnum
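Under that design, intercepting the number becomes an ordinary one-argument redefinition; for example (a sketch only):

```latex
% Default: identity, so the number typesets as usual.
\def\cftsubsecpresnum#1{#1}
% A user hiding subsection numbers just gobbles the argument:
\renewcommand\cftsubsecpresnum[1]{}
% ...or decorates it, without any delimited-argument tricks:
\renewcommand\cftsubsecpresnum[1]{\textit{#1}}
```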

Does anyone know of any best practices here?

New babel release on CTAN

In May 2012, Javier Bezos wrote the following message to the LaTeX-L mailing list:

Babel gets back on track and it is again actively maintained. The goals are mainly to fix bugs, to make it compatible with XeTeX and LuaTeX (as far as possible), and perhaps to add some minor new features (provided they are backward compatible).

No attempt will be done to take full advantage of the features provided by XeTeX and LuaTeX, which would require a completely new core (as for example polyglossia or as part of LaTeX3).

In November of that year, a new alpha release was uploaded to CTAN, and in January 2013 a beta release was made. In the change notes to the beta, Javier wrote:

I hope the final version will be available by March/April.

Well, as of a few days ago, version 3.9b of babel is now available on CTAN. Read more about the release at Javier’s website.

It’s been around five years since the last update by Johannes Braams, who was babel’s original developer and who maintained it for many years, pre-dating my involvement with the TeX world. (Indeed, Javier himself has been a contributor to the LaTeX community longer than I.)

Babel has had a bit of a funny role in LaTeX, as it is not strictly an integral part of LaTeX2e itself, but until recently was tightly knit into LaTeX2e’s development. With active development of babel required to fix bugs and provide (some) support for XeTeX and LuaTeX, it was no longer appropriate to keep babel as part of the LaTeX2e release cycle.

Congratulations to Javier for continuing babel and successfully releasing an important new version. It’s people like this who keep LaTeX relevant in what seems like a quickly changing world.

It’s difficult to give up code you’ve lived with for many years, and I know it must have been a hard decision for Johannes to pass on the reins to another. I know this because I’m staring at the prospect myself, with hundreds of unread emails from the last eighteen months, many of which are questions and suggestions on many of the LaTeX packages I maintain.

I’m not ready to give them up myself, but I will need to work hard this year to get back on track. I use people like Javier as inspiration to keep on shipping.

New version of pstool

Somewhat to my surprise, I have a new version of pstool to release. This is a LaTeX package I maintain that provides an easy workflow (I hope) for including psfrag graphics in pdfLaTeX documents. This allows, say, graphs from Matlab or Mathematica to be presented with fonts matching your document and using macros defined in your main preamble.

For context below, here’s a brief run-down of how it works. As a graphic is requested in the main document (using \psfragfig or similar), a check is made whether it has been converted into PDF in a previous compilation (and that the source graphic hasn’t been updated in the meantime). If so, it saves some time and simply loads that graphic. If not, an entirely new LaTeX document is created with the preamble of the main document, but which otherwise contains only the graphic that needs to be processed. This document is run through the regular (‘old-fashioned’) latex → dvips → ps2pdf compilation workflow, which performs the wanted psfrag substitutions (and any other PostScript-based processing), and the final figure is cropped to size. Execution then returns to the main document, where the PDF graphic can be inserted directly, and the compilation of the main document continues as normal.
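From the user’s side, all of this is hidden behind a single command; a minimal sketch, where the filename and width are placeholders of my own:

```latex
\usepackage{pstool}

% In the document body: 'sim-results' is a placeholder EPS graphic
% with accompanying psfrag substitutions.
\psfragfig[width=\columnwidth]{sim-results}
```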

The pstool package has been invaluable to me in the preparation of my PhD thesis, but I haven’t had the opportunity to work on the code for a long time. In fact, I discovered some code I’d written for it in 2010 (see below) that I’d never released to CTAN. Why a new version now, then? Because I hate it when my code breaks and wastes people’s time.

Windows ps2pdf breaking

Some time back, the maintainers of ps2pdf changed the way that options were passed on the command line under Windows, so where previously pstool was writing something like

ps2pdf "-dPDFSETTINGS=/prepress" "foo.ps" "foo.pdf"

to generate the PDF file from the psfrag-generated PostScript file, the default settings were now terribly broken for recent users of MiKTeX, causing their documents to fail to compile. The syntax which now works correctly is

ps2pdf -dPDFSETTINGS#/prepress "foo.ps" "foo.pdf"

Note # instead of = to set the options. In order to fix the issue, I decided to separate the fix from the user-level interface to pstool, which can set ps2pdf options with syntax like

    \usepackage[ps2pdf-options={-dPDFSETTINGS=/prepress}]{pstool}
Rather than requiring users to know to replace the = with \# (remembering that # is a special character that needs to be escaped), pstool automatically replaces all equals signs in the ps2pdf options with hash characters. This provides some level of platform independence so that documents will compile correctly on Windows, Linux, or Mac OS X. Of course, it will still work correctly for Windows users who write \# anyway.

More robust preamble scanning

The second feature (and this is the code that was written in 2010 but never released) is a more robust scanning method for copying the main document’s preamble into the preamble for each externally processed graphic. In prior versions of the package, this was achieved by redefining \begin{document} but this method broke whenever a class or package did their own redefinition. The command \EndPreamble was provided to manually indicate where the preamble should end, but users would usually run across the error and not know where to look to fix the problem.

The changes for this feature are all internal and shouldn’t be noticeable if the code is working correctly; \EndPreamble still works for the cases where you don’t need the entire document preamble to be replicated for each graphic.

Cross-references and citations supported

The third major change for the new version of pstool is to support cross-references and citations in externally-processed graphics. This has been on my wish-list for some time, as I know that some people have used references and citations in their figures and graphs, and have had to use Rolf Niepraschk’s pst-pdf (or my wrapper package auto-pst-pdf) in this case. That solution is fine and perhaps still preferable in many cases, but if it could be done in pstool at all, I wanted to try to support it.

It wasn’t entirely straightforward, but I think it now works. The short of it is: the main file’s .aux and .bbl files are now copied over for each graphic’s external compilation; the current page number is passed into the external graphic; and anything written to the .aux file by the external graphic is copied back into the main document’s .aux file. Copying the .bbl file is only necessary to support biblatex use, for reasons I’m still not entirely clear on.

What does this allow us to do now? If you have a document that defines several different equations and plots the results of those equations, it is possible to annotate the figure with the actual equation numbers cross-referenced from the document. Or even include a citation in the graph for some method that was described in another paper.

However, there’s a downside to all of this, which is that pstool graphics require multiple passes to resolve any cross-referencing. And this is not possible to detect automatically (well, it may be theoretically possible but I don’t think it’s easy). So the old approach of pstool to check the PDF and update the graphic only if necessary fails for those which use cross-references; in these cases, it is necessary to force their generation manually until everything resolves. This can be done on a per-graphic level by using \psfragfig*, or on a document level by loading the package with the [process=all] option.

This new version is still being tested on GitHub (please check it out if you’d like to help!) and I hope to upload it to CTAN in the next few days. I hope this release marks the beginning of somewhat of a return to the LaTeX community for me — last year was tumultuous for many reasons and I hope I can now re-dedicate some time every week.

Famous last words.

Here’s a photo I took of Donald Knuth at the TUG 2010 conference during his “Earthshaking Announcement”. I’m not sure exactly how it has been distributed (perhaps Facebook) but I see it’s now floating around the internet. I’m not a big stickler for attribution in this case—I’m quite happy the photo turned out this well at all, let alone good enough for others to use. In any event, I thought it would be a good idea to clear up who the original author of the photograph was.

For future reference, I’m happy for my photographs to be freely used with attribution. Should really start making more of them public…

TUG 2012 in Boston

This year’s been a bit of a rollercoaster for being busy and trying to make decisions. After flip-flopping over the last six months, I’ve decided at the last minute to attend TUG 2012 in Boston in July. (Thanks to Steve Peter, president, for the gentle nudge that finally sorted me out.)

TUG is the annual conference of the TeX Users Group, and I attended it for the first time in San Francisco in 2010. It was the best conference I’ve been to, and I’m expecting to have a great time in Boston as well. I’ve barely been to the east coast of the US, so I’m excited to explore the city in the days around the conference.

For my presentation, I’ll talk about LaTeX3 from a fairly high level: why it exists, why I work on it, and other assorted aspects of my somewhat-dormant TeX work. (Spoiler: without LaTeX3 I wouldn’t have been able to develop unicode-math; it would simply have been too much work.) If there’s anything in particular you think I should address, feel free to drop me a line beforehand.

If you yourself are attending the conference, please say hi. What do people like to drink in Boston? I’ll buy you one.