LaTeX Alive
Auto-resize images that overfill a page

LaTeX’s support for graphics through the graphicx package is reliable, but occasionally I find its options somewhat limiting. One thing that often causes me to stop and immediately recompile a document I’m working on is when an image I’ve inserted is poorly sized and takes up far more space than it should:

In these sorts of situations I’ll immediately chuck in a quick [width=\textwidth] and merrily continue writing my document (or sometimes [height=0.7\textheight] in a beamer document, where vertical space is often compromised instead).
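Spelled out in full, the manual fix is just an option to \includegraphics (bigfigure here is a placeholder filename):

```latex
% Shrink an over-wide image down to the text width:
\includegraphics[width=\textwidth]{bigfigure}
% In beamer, where vertical space is usually the constraint:
\includegraphics[height=0.7\textheight]{bigfigure}
```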

I’ve often thought of writing some code that wraps around \includegraphics that would check whether the width or height of an included graphic was too large and shrink down the image if and only if necessary. Note that while it is possible to specify

\usepackage{graphicx}
\setkeys{Gin}{width=\textwidth}

this has the side-effect of making all the graphics the width of the text — often just as disastrous when inserting smaller images. Furthermore, after setting this option globally, trying to write height=<whatever> will have unexpected consequences!
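For instance, with the global width in force an explicit height will distort the image, since graphicx then applies both dimensions, unless keepaspectratio is also given (somegraphic is a placeholder name):

```latex
\setkeys{Gin}{width=\textwidth}
% Later: the global width AND this height both apply, stretching
% the image unless keepaspectratio is added as well:
\includegraphics[height=5cm,keepaspectratio]{somegraphic}
```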

Luckily for me, the TeX community is wide and wonderful, and Martin Scharrer has already implemented what I’m after (and more) in the adjustbox package. I’ll leave you to digest the rest of the manual on your own, but here’s a simple declaration in the preamble:

\usepackage[Export]{adjustbox}
\adjustboxset{max size={\textwidth}{0.7\textheight}}

This enables the smart resizing that I’ve always wanted without any change to the rest of the document — large images are resized down but small images are left as they are. There are also max width and max height options in case you do not wish to set both as in the above example.
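For example, to cap only the width (assuming the same Export setup as above):

```latex
% Over-wide images are scaled to the text width; height is unconstrained:
\adjustboxset{max width=\textwidth}
```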

This may sound like a small thing, but I iterate quickly when writing lectures with beamer, where I often use images that need shrinking. An unobtrusive method that avoids resizing them manually each and every time will save me perhaps not appreciably in time, but certainly in brain space. Thanks Martin!

Avoiding linebreaks before citations and things

When using a numerical bibliography style, it’s considered bad form (at least by me) to have a linebreak before the citation. You might see suggestions in LaTeX guides to write

... something profound~\cite{robertson2013}.

to avoid the citation number (‘[72]’, or whatever) ending up on its own at the beginning of a line.

This idea departs slightly from LaTeX’s philosophy of separating form and content, since you have to explicitly remember to insert the non-breaking space before each citation. I prefer to do this sort of thing automatically, so that I can write

... something profound \cite{robertson2013}.

and rest assured in the knowledge that I’ll have no lonely citations. Donald Arseneau’s cite package will do exactly that if you ask it to, although these days you’re more likely to be using biblatex or similar, which precludes the use of cite. The method used by cite is nice and general:

\def\cite@adjust{\begingroup%
  \@tempskipa\lastskip \edef\@tempa{\the\@tempskipa}\unskip
  \ifnum\lastpenalty=\z@ \penalty\citeprepenalty \fi
  \ifx\@tempa\@zero@skip \spacefactor1001 \fi % if no space before, set flag
  \ifnum\spacefactor>\@m \ \else \hskip\@tempskipa \fi
  \endgroup}

(in inimitable Arseneau style). If you can follow this code then feel free to use it for your own documents. Were I writing a package to automatically insert nonbreaking spaces, I’d use something very similar.

But for my own documents, things are a little simpler. The easiest solution is

\let\oldcite\cite
\renewcommand\cite{\unskip~\oldcite}

which I can write without even thinking and sometimes do. To be slightly more general, it would be nice if this code didn’t add a space if one wasn’t already there to begin with.

So here’s a command \nobreakbefore that examines whether there’s any previous space and, if so, removes it and re-adds a non-breaking space. It suits basically all of my needs, which is enough when I’m running against a deadline on my thesis:

\def\nobreakbefore{%
  \relax\ifvmode\else
    \ifhmode
      \ifdim\lastskip > 0pt\relax
        \unskip\nobreakspace
      \fi
    \fi
  \fi
}
\let\oldcite\cite
\renewcommand\cite{\nobreakbefore\oldcite}

Being able to automate this sort of thing is one of the reasons that I like LaTeX.

Use of § in refstyle

I’m a big fan of the refstyle package. (Before I knew it existed, I started writing something similar myself. I’m glad I found refstyle before it was too late!)

The refstyle package automates the use of cross references; while vanilla LaTeX would have us write Figure~\ref{xyz}, in refstyle this is written \figref{xyz}. This is far more flexible, and the syntax lends itself to many clever extensions, such as referring to ranges of figures with \figrangeref or to multiple individual ones using comma-separated lists. (And sections, and chapters, and equations, etc.)

From memory, The LaTeX Companion does not mention refstyle; I don’t recommend the use of prettyref or fancyref these days, as they’re both very limited in comparison. There is a rival to refstyle named cleveref which I have not used; it has been actively developed for a number of years and is worth checking out.

I’ll talk about refstyle’s syntax vs cleveref another day, perhaps. If we were to choose one of them to emulate for a LaTeX3 package, which would it be? I do not know. We’re not at that stage now, so I’ll put off thinking about it.

What I do want to discuss here is the use of the section sign, §. By default, writing

see \secref{foo} for foo and we know about bar already (\secref*{bar}).

results in the output

see section §1 for foo and we know about bar already (§2).

In this case, I’m (ab)using the default appearance of the section sign to use \secref* as a ‘short reference’ that’s nice and tidy within parentheses, while preferring to spell out ‘section’ explicitly in text within a sentence.

My PhD supervisor has pointed out to me, however, that writing ‘section §1’ is like writing ‘section section 1’ — not really the thing to do. Luckily refstyle provides hooks (namely, \ifRSstar) that allow us to define a reference that defaults to ‘section 1’ but shifts to ‘§1’ when the star form is used.

In the configuration file that defines the refcmd for sections, simply write

refcmd  = {\ifRSstar\S\fi\ref{#1}},

and we’re all set.
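For a preamble-only setup (rather than editing the configuration file), something along these lines should work, following the \newref syntax from the refstyle manual; note the real default configuration sets more keys than shown here:

```latex
\usepackage{refstyle}
\newref{sec}{%
  name   = {section~},               % \secref{foo}  -> `section 1'
  Name   = {Section~},               % sentence-start variant
  refcmd = {\ifRSstar\S\fi\ref{#1}}, % \secref*{foo} -> `§1'
}
```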

(At the same time, I switch all of the lowercase alternatives to uppercase so that cross-references are always ‘Section 1’ and so on — this is probably a regional preference.)

Removing subsection numbers in a ToC

I’m very behind on taking care of the Herries Press packages (PDF) that I’m maintaining. Apologies to all who have emailed with suggestions, comments, and questions, and to whom I’ve unfortunately not been able to respond.

Funny request today: how can I remove subsection numbers from the Table of Contents without removing their entries altogether? LaTeX’s tocdepth counter controls which entries will appear in the ToC, but has nothing to say about their formatting. Adjusting LaTeX’s ToC formatting yourself is of course possible, but the tocloft package makes things a bit easier.

Still, in this case it doesn’t provide an easy interface, since who in their right mind would want to number a subsection but not include that number in the ToC? (You can’t fight stylistic decisions like this sometimes—just go with the flow.) Even with tocloft, it’s still necessary to write some internal code — and caveat emptor as always in such a case because you never know when the package might change its internal commands behind your back!

In this case, the trick is to read the tocloft documentation and notice that it provides hooks for inserting material before and after the number of a section entry in the ToC. For subsections, these commands, respectively, are \cftsubsecpresnum and \cftsubsecaftersnum. E.g., write

\renewcommand\cftsubsecaftersnum{!}

to have a bang after every subsection number. (Presumably, a style or spacing change would be more common here.)

Aha! The intrepid LaTeX programmer at this point might suggest something like this:

\def\cftsubsecpresnum#1\cftsubsecaftersnum{}

That is, when the pre-hook before the number executes, read everything up to and including the after-hook, and do nothing with it. (If the definition was {#1} instead, then it would typeset as usual.)

Well, give it a try like I did, and you’ll discover it fails miserably. Back to the drawing board—time to read the package source. The issue stems from the fact that LaTeX uses one command only to typeset each line of a ToC, so this command must be completely general whether it’s a chapter or section or subsection that it’s dealing with.

Internally, the number in the ToC is typeset using something like

\@cftbsnum « the number itself » \@cftasnum

where \@cftbsnum and \@cftasnum are \let equal to their appropriate high-level definitions \cft…presnum and \cft…aftersnum as necessary for the context. Therefore, any look-ahead-and-gobble command must be defined specifically to look ahead for the internal version of the hook rather than the high-level name for it given in the documentation.

Et voilà:

\usepackage{tocloft}
\makeatletter
\def\cftsubsecpresnum#1\@cftasnum{}
\makeatother

I still don’t think this is very useful, but it’s a nice trick.


Hooks are funny in this way. If you define them as commands taking an argument, such as

\cftsubsecsnumhook{ « the actual number here » }

it’s hard to parse in a way that involves scanning ahead for the contents of the number. On the other hand, when the hook takes the form used here, with the number surrounded by a pre-hook and a post-hook, it’s hard to grab the whole argument and do something with it, since the internal post-hook might not be very user-accessible, as in the example above.

I don’t know whether there’s a better way to structure hooks, however. One possibility I can think of would be a generic ‘end hook’ token that could always be read up to, such as

\cftsubsecpresnum « the actual number here » \cftendhook \cftsubsecaftersnum

where \cftendhook was simply a no-op. There would be some subtle issues with nesting that would be somewhat painful to work around.

Maybe it would be better to have the number braced as an argument, and instead of having \cftsubsecpresnum be a no-op by default it could be an identity function (\def\cftsubsecpresnum#1{#1}):

\cftsubsecpresnum { « the actual number here » } \cftsubsecaftersnum

Does anyone know of any best practices here?

New babel release on CTAN

In May 2012, Javier Bezos wrote the following message to the LaTeX-L mailing list:

Babel gets back on track and it is again actively maintained. The goals are mainly to fix bugs, to make it compatible with XeTeX and LuaTeX (as far as possible), and perhaps to add some minor new features (provided they are backward compatible).

No attempt will be done to take full advantage of the features provided by XeTeX and LuaTeX, which would require a completely new core (as for example polyglossia or as part of LaTeX3).

In November of that year, a new alpha release was uploaded to CTAN, and in January 2013 a beta release was made. In the change notes to the beta, Javier wrote:

I hope the final version will be available by March/April.

Well, as of a few days ago, version 3.9b of babel is now available on CTAN. Read more about the release at Javier’s website.

It’s been around five years since the last update by Johannes Braams, who was babel’s original developer and who maintained it for many years, pre-dating my involvement with the TeX world. (Indeed, Javier himself has been a contributor to the LaTeX community longer than I.)

Babel has had a bit of a funny role in LaTeX, as it is not strictly an integral part of LaTeX2e itself, but until recently was tightly knit into LaTeX2e’s development. With active development of babel required to fix bugs and provide (some) support for XeTeX and LuaTeX, it was no longer appropriate to keep babel as part of the LaTeX2e release cycle.

Congratulations to Javier for continuing babel and successfully releasing an important new version. It’s people like this who keep the LaTeX world relevant in quickly changing times.


It’s difficult to give up code you’ve lived with for many years, and I know it must have been a hard decision for Johannes to pass the reins to another. I know this because I’m staring at the prospect myself, with hundreds of unread emails from the last eighteen months, many of which are questions and suggestions on many of the LaTeX packages I maintain.

I’m not ready to give them up myself, but I will need to work hard this year to get back on track. I use people like Javier as inspiration to keep on shipping.

New version of pstool

Somewhat to my surprise, I have a new version of pstool to release. This is a LaTeX package I maintain that provides an easy workflow (I hope) for including psfrag graphics in pdfLaTeX documents. This allows, say, graphs from Matlab or Mathematica to be presented with fonts matching your document and using macros defined in your main preamble.

For context below, here’s a brief run-down of how it works. When a graphic is requested in the main document (using \psfragfig or similar), a check is made whether it has been converted to PDF in a previous compilation (and that the source graphic hasn’t been updated in the meantime). If so, pstool saves some time and simply loads that graphic. If not, an entirely new LaTeX document is created with the preamble of the main document, but which otherwise contains only the graphic that needs to be processed. This document is run through the regular (‘old-fashioned’) latex → dvips → ps2pdf compilation workflow, which performs the required psfrag substitutions (and any other PostScript-based processing), and the final figure is cropped to size. Execution then returns to the main document, where the PDF graphic is inserted directly and compilation continues as normal.

The pstool package has been invaluable to me in the preparation of my PhD thesis, but I haven’t had the opportunity to work on the code for a long time. In fact, I discovered some code I’d written for it in 2010 (see below) that I’d never released to CTAN. Why a new version now, then? Because I hate it when my code breaks and wastes people’s time.

Windows ps2pdf breaking

Some time back, the maintainers of ps2pdf changed the way that options were passed on the command line under Windows, so where previously pstool was writing something like

ps2pdf "-dPDFSETTINGS=/prepress" "foo.ps" "foo.pdf"

to generate the PDF file from the psfrag-generated PostScript file, the default settings were now terribly broken for recent users of MiKTeX, causing their documents to fail to compile. The syntax which now works correctly is

ps2pdf -dPDFSETTINGS#/prepress "foo.ps" "foo.pdf"

Note # instead of = to set the options. In order to fix the issue, I decided to separate the fix from the user-level interface to pstool, which can set ps2pdf options with syntax like

\usepackage[
    ps2pdf-options={-dPDFSETTINGS=/prepress}
]{pstool} 

Rather than requiring users to know to replace the = with \# (remembering that # is a special character that needs to be escaped), pstool automatically replaces all equals signs in the ps2pdf options with hash characters. This provides some level of platform independence so that documents will compile correctly whether on Windows, Linux, or Mac OS X. Of course, it will still work correctly for Windows users who write \# anyway.
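The substitution itself can be sketched in a few lines of expl3; this is only an illustration (with made-up variable names), not pstool’s actual implementation:

```latex
\ExplSyntaxOn
% Replace every `=' in the option string with a category-12 `#',
% the syntax now expected by ps2pdf on Windows.
\tl_new:N \l_my_psopts_tl
\tl_set:Nn \l_my_psopts_tl { -dPDFSETTINGS=/prepress }
\tl_replace_all:Nnn \l_my_psopts_tl { = } { \c_hash_str }
\ExplSyntaxOff
```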

More robust preamble scanning

The second feature (and this is the code that was written in 2010 but never released) is a more robust scanning method for copying the main document’s preamble into the preamble for each externally processed graphic. In prior versions of the package, this was achieved by redefining \begin{document}, but this method broke whenever a class or package performed its own redefinition. The command \EndPreamble was provided to manually indicate where the preamble should end, but users would usually run into the error without knowing where to look to fix the problem.

The changes for this feature are all internal and shouldn’t be noticeable if the code is working correctly; \EndPreamble still works for the cases where you don’t need the entire document preamble to be replicated for each graphic.

Cross-references and citations supported

The third major change for the new version of pstool is support for cross-references and citations in externally processed graphics. This has been on my wish-list for some time, as I know that some people have used references and citations in their figures and graphs, and have had to use Rolf Niepraschk’s pst-pdf (or my wrapper package auto-pst-pdf) in this case. That solution is fine and perhaps still preferable in many cases, but if it could be done in pstool then I wanted to try to support it.

It wasn’t entirely straightforward, but I think it now works. The short of it is: the main file’s .aux and .bbl files are now copied over for each graphic’s external compilation; the current page number is passed into the external graphic; and anything written to the .aux file by the external graphic is copied back into the main document’s .aux file. Copying the .bbl file is only necessary to support biblatex use, for reasons I’m still not entirely clear on.

What does this allow us to do now? If you have a document that defines several different equations and plots the results of those equations, it is possible to annotate the figure with the actual equation numbers cross-referenced from the document. Or even include a citation in the graph for some method that was described in another paper.

However, there’s a downside to all of this, which is that pstool graphics require multiple passes to resolve any cross-referencing. And this is not possible to detect automatically (well, it may be theoretically possible but I don’t think it’s easy). So the old approach of pstool to check the PDF and update the graphic only if necessary fails for those which use cross-references; in these cases, it is necessary to force their generation manually until everything resolves. This can be done on a per-graphic level by using \psfragfig*, or on a document level by loading the package with the [process=all] option.
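In practice the two forms look like this (fig/results is a placeholder filename):

```latex
% Document-wide: regenerate every graphic on each run
\usepackage[process=all]{pstool}
% ... or per-graphic, in the document body: the starred form
% forces regeneration of just this figure
\psfragfig*{fig/results}
```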


This new version is still being tested on GitHub (please check it out if you’d like to help!) and I hope to upload it to CTAN in the next few days. I hope this release marks the beginning of somewhat of a return to the LaTeX community for me — last year was tumultuous for many reasons and I hope I can now re-dedicate some time every week.

Famous last words.

Here’s a photo I took of Donald Knuth at the TUG 2010 conference during his “Earthshaking Announcement”. I’m not sure exactly how it has been distributed (perhaps Facebook) but I see it’s now floating around the internet. I’m not a big stickler for attribution in this case—I’m quite happy the photo turned out this well at all, let alone good enough for others to use. In any event, I thought it would be a good idea to clear up who the original author of the photograph was.

For future reference, I’m happy for my photographs to be freely used with attribution. Should really start making more of them public…

TUG 2012 in Boston

This year’s been a bit of a rollercoaster for being busy and trying to make decisions. After flip-flopping over the last six months, I’ve decided at the last minute to attend TUG 2012 in Boston in July. (Thanks to Steve Peter, president, for the gentle nudge that finally sorted me out.)

TUG is the annual conference of the TeX Users Group, and I attended it for the first time in San Francisco in 2010. It was the best conference I’ve been to, and I’m expecting to have a great time in Boston as well. I’ve barely been to the east coast of the US, so I’m excited to explore the city in the days around the conference.

For my presentation, I’ll talk about LaTeX3 from a fairly high level: why it exists, why I work on it, and other assorted aspects of my somewhat-dormant TeX work. (Spoiler: without LaTeX3 I wouldn’t have been able to develop unicode-math; it would simply have been too much work.) If there’s anything in particular you think I should address, feel free to drop me a line beforehand.

If you yourself are attending the conference, please say hi. What do people like to drink in Boston? I’ll buy you one.

Lucida Math OpenType

As part of the next incarnation of the Lucida typefaces, I’ve been testing out the OpenType versions of the maths fonts.

It’s fair to say that most people will have seen a Lucida font in one form or another. Lucida has been very popular in the past as one of the very few commercial and unique maths fonts for TeX. It is a super-family of fonts with more font faces than I’m aware of in any other such collection, including serif, sans serif, typewriter, script, blackletter, handwriting, casual, fax (a sturdier serif), and symbols or wingdings. It has been distributed very widely on various computer systems, with Lucida Grande used in Mac OS X itself (the menu font, among other uses), and Java and Windows shipping the standard serif/sans/mono trio for many years.

For more information about the Lucida fonts and their new release, Ulrik Vieth and Mojca Miklavec have just published a TUGboat paper with all the gossip. (Link accessible for TUG members only until it becomes open access in one year.) Further information can also be found on TUG’s own site for the font and an overview of the fonts written by the font designers, Charles Bigelow and Kris Holmes.

Due to my particular interest in Unicode maths, I’m especially excited to see a new OpenType maths font on the scene. I like to think that we’re entering into somewhat of a golden age of maths font design, as OpenType provides a mechanism by which both TeX users and GUI apps such as Microsoft Word can use them; in times past, it was simply rarely worth the expense of creating a new TeX maths font and this is evident by the relative paucity of them.

After the Lucida release later this year, available OpenType maths fonts will include:

  • Latin Modern Math (1347)
  • XITS Math (2428)
  • Cambria Math v1.0 (1592)
  • Lucida Math (1947)
  • Lucida Math Demibold (877)
  • Asana Math (2240)
  • Neo Euler (411)

shown with approximate symbol counts for each font (based on symbols defined in unicode-math, not literal glyph counts). The STIX fonts are the reference here, and contain the largest number of symbols; I believe recent versions of Cambria Math have more than is listed above. But it is impressive to see that not only does Lucida already have large glyph coverage, it will also be provided with a ‘bold’ version of the font and, as far as I know, will be the first OpenType maths font to offer this.

To facilitate the use of the bold font, unicode-math can now load multiple maths fonts simultaneously through LaTeX’s \mathversion command (although this new feature hasn’t been widely tested), which is shown in the image above. Note in the image above the integral sign for the bold example isn’t scaling yet; this will be rectified of course for the release version of the fonts.

I’m very pleased to be able to play whatever small role I can in bringing these fonts to new audiences. Many thanks to Charles Bigelow and Kris Holmes for working on the new fonts and everyone involved in the OpenType transition but particularly the seemingly tireless Khaled Hosny. Great work!

Non-textual tabular requirements

I once wrote a procedure for drawing tabulars with square cells; it was one of my earliest experiences with LaTeX programming, actually. When I’d done so, I received a comment ‘why doesn’t LaTeX allow this easily’?

Well, I wondered, why not? My feelings at the time (echoed a little today) were that LaTeX is a tool for writing largely technical documents, and such specific requirements fall outside its regular bounds of ways to typeset tabular material. (Forget, for the minute, that LaTeX’s tables are pretty ugly by default; I’m assuming everybody uses the booktabs package.)

I still agree with the author of booktabs on this matter: ‘It is not going too far to say that if you cannot create a table using the commands in this package, you should redesign it.’ However, I do admit that things like tables with coloured rows and so on do have their uses.

Just recently Karl Berry mentioned he wanted to typeset a grid of images with a large image in the centre. (Not unlike what you see at the top of the page. Just replace the coloured boxes by real images.) This isn’t something that LaTeX does out of the box, and I’m not sure, actually, if any third-party package can do it either. (I tried but failed with multirow.)

Spurred on by the requisite ConTeXt example that ‘just works’, my own attempt to implement this arrangement turned out to be quite easy but not exactly straightforward. Let me know if there’s a better way.

First of all, the input syntax:

\begin{tighttabular}{@{}c@{}c@{}c@{}c@{}c@{}c@{}}
\1&\1&\1&\1&\1&\1\\
\1&\1&\1&\1&\1&\1\\
\1&\1&  &  &\1&\1\\
\1&\1&\9&  &\1&\1\\
\1&\1&\1&\1&\1&\1\\
\1&\1&\1&\1&\1&\1\\
\end{tighttabular}

Notice that my approach simply puts the material in the lower-left cell in order to fill in the space taken up by the others. This would not work if the cells were of unknown and uneven sizes.

The definition of tighttabular is easy; just define \arraystretch to zero, locally:

\newenvironment{tighttabular}{%
  \def\arraystretch{0}%
  \begin{tabular}%
}{%
  \end{tabular}%
}

How do we place material in that box \9 so it comes out with the correct alignment? Actually, it’s not that bad:

\def\9{%
  \rlap{\smash{\largebox}}%
}

The \rlap first removes the horizontal width, and the \smash removes the vertical height. This is done so that the cell that holds \9 takes up only the same amount of space as the other cells around it. (Otherwise, they would stretch to fit, distorting the size and alignment of the tabular.)

Finally, what is \1 and how do you get the colours to do that?

\def\1{\smallbox}
\def\smallbox{\color{blah!!+}\rule{2cm}{2cm}}
\def\largebox{\scalebox{2}{\smallbox}}

(\scalebox requires the graphicx package.) These boxes use the xcolor package’s very convenient ‘colour series’ feature:

\usepackage{xcolor}
\definecolorseries{blah}{hsb}{step}[hsb]{.5,1,1}{.1,-.05,0}
\resetcolorseries{blah}

And that’s it. Whether you think this whole approach is nice and straightforward or horribly arcane will be somewhat of a personal decision. We’re still awaiting the one tabular package to replace all others in the LaTeX world, although I’m pleased to see recent efforts moving towards providing a complete interface to the different methods supported by the various LaTeX third-party packages in this area.

(With my LaTeX3 hat on: no, as far as I know we’ve not even begun thinking about how this might be dealt with there. I’m not an expert in this area. Although I will say that I mildly dislike both LaTeX’s & and ConTeXt’s \bTR…\eTR syntax; for me, the former is too close to the metal and the latter too verbose.)