Monday, October 29, 2012

jQuery UI selectmenu and the change event

Many moons ago, the Filament Group created a jQuery UI component called Selectmenu.

Then Felix Nagel forked it, and made improvements!

If you're using jQuery UI 1.8.X or older, look at the links from Felix Nagel's forked version.  If you're using jQuery UI 1.10.X, it should be available officially within jQuery UI.  However, at the time of writing, jQuery UI 1.9 is what is in the wild, so you have to cherry-pick from GitHub, and you don't quite get the full benefit of ThemeRoller.

Selectmenu does a few things in ways that are a bit counterintuitive.  This is mostly because it is built from existing components.

At some point, to set the widget to an existing value in the select list, you needed to do this

$("#myselect").selectmenu("value", value);

However, these days you can get away with

$("#myselect").val(value);

And if you've altered the options on the select list, you may also want to follow up with

$("#myselect").selectmenu("refresh");

One of the counterintuitive things about the component is its handling of the change event.  With a native select element, you can do any of the following, depending on your jQuery version and preference:

$("#myselect").change(function() {});
$("#myselect").bind("change", function() {});
$("#myselect").on("change", function() {});

But given the Selectmenu component, as it stands, you need to do this:

$("#myselect").selectmenu({change: function() {}});

But this doesn't allow for event propagation.  So here's what I've started doing to allow event propagation, whenever I set up a Selectmenu instance.

var changeSelectMenu = function(event, item) {
    // Re-trigger the widget's change as a regular change event on the
    // underlying select element, passing the selected item along.
    $(this).trigger('change', item);
};
$("#myselect").selectmenu({change: changeSelectMenu});

And now I can use any of the previous methods to implement change handling, and have multiple handlers for the event.

Caveat: I'm a scoping noob, so usage of $(this).trigger() may be bad. My intention is that it should target the element that the component is attached to.  Perhaps I should even be using the event to get the event name, rather than hard-coding 'change', and then I could call the function propagateEvent.  Maybe there is even a better way to propagate the original event, rather than starting a new one.
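
Something like this might do it, though it's an untested sketch: depending on the widget version, event.type may arrive prefixed with the widget name (e.g. 'selectmenuchange'), so I strip any such prefix first.

// Generic version: re-trigger whatever event the widget raised.
var propagateEvent = function(event, item) {
    // Strip a leading 'selectmenu' from the event type, if present.
    $(this).trigger(event.type.replace(/^selectmenu/, ''), item);
};
$("#myselect").selectmenu({change: propagateEvent});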

I would rather that the component set this behaviour up by default, but there's probably something intrinsically wrong about it that made it seem like a bad idea.

Monday, September 17, 2012

Bye Bye, IE8

On one of my rare visits to Twitter, I spotted a fantastic bit of news.  With the release of IE10 on October 26, 2012, Google Apps will be dropping support for IE8 from November 15, 2012.

So where does that leave the various operating systems that run a version of Internet Explorer?

For Windows XP SP3, support for your operating system ends on April 8, 2014.  If you were running the 64-bit version of Windows XP, the latest release was SP2, and support for that ended on July 13, 2010.  You can't upgrade to IE9, so your best bet for a modern standards-compliant browser is Chrome or Firefox.

For Windows Vista, it's a bit trickier.  Current documentation generally states April 11, 2012.  But support for SP2 ends 24 months after the release or at the end of the product's support lifecycle, whichever comes first.  Considering that SP2 was released on April 29, 2009, support should have ended April 29, 2011. Either way, you can still use IE9 to get some HTML5 functionality, but if you're going to hold on to your Vista release for as long as some folks are holding on to their XP releases, you might also consider installing Chrome or Firefox.

For Windows 7, mainstream support is expected until January 13, 2015, and extended support ends January 14, 2020.  You'll also be able to upgrade to IE10, but frankly, why should you have to wait for a major release of a browser that is really aimed at another operating system before you can get access to the features of a modern standards-compliant browser?  That's right: get Chrome or Firefox anyway.

For Windows 8 users, I guess you're running the Release Preview at the moment, and will be able to get the full retail version on October 26, 2012, the same day that IE10 is released.  If you're sticking with IE, you can expect two major upgrades to your web browser with the next two releases of Windows, at approximately three-year intervals. Or, you can start using Chrome or Firefox now, enable the sync feature, and have your extensions install automatically on your Windows 8 installation of the browser.

I haven't covered older releases of operating systems or servers here, but things are getting pretty grim if you're still running Windows 2000 or Windows Server 2003.  I don't think Chrome is supported on those operating systems, and details for the latest release of Firefox are hard to find. Firefox 14 is supported on Windows Server 2003, but it's not the latest release at the time of writing, and the link provided is for that version.

Personally, I'm sticking with Chrome.  It's proven to be pretty good for my development needs, but I always keep an installation of IE and Firefox around.  I should also keep an install of Safari for Windows around as well, for times when I want to run different logins to the same development site at the same time, and maintain the consistency of WebKit rendering.  I'll probably upgrade my IE9 install to IE10, and hope that it maintains the render-as modes that IE9 does (or I'll check someone else's IE10 install first).
-------
Update: Went looking for Safari for Windows, but it seems that Apple pulled support and all but one link for Safari 5.1.7 for Windows. Maybe I'll use Opera instead.  When I need to.

Friday, July 20, 2012

In defense of the Go card

In Brisbane, the local transport authority has issued a card that allows people to use the public transport system without having to tender cash at stations and bus stops, and allows swift entry and exits from train stations, buses and ferries without the need for a manual ticket inspection. It's been around for a few years.

In Brisbane, it's called a Go card. In London, it's called an Oyster card. Other places around the world have probably adopted something similar.

The usage is fairly simple. When you arrive at a train station, or board a bus or ferry, you touch the card against a round disc called a Go card reader. It tells you how much money you have left on the card. Before you leave a train station, or leave a bus or ferry, you touch the card again. It will tell you how much the fare was, and how much money you have left on the card. With the bus system, you have to wait until the onboard computer is sync'd with the central system before you can see your journeys online.

The tech isn't without its faults.  Typically, I catch the train into work, but a few weeks ago I decided to catch the bus.  The bus is closer to my house, but counts as three zones of travel, which attracts a price of $4.24 each way.  When I catch the train, I drive for 15 minutes or so, and catch a train that is only two zones and costs $3.58.

Anyway, one morning on the way to work, the ticketing computer on the bus decided to continually reboot. This meant that anyone already on the bus was unable to swipe off when leaving, and anyone getting on would be travelling for free. The side effect is that if you don't swipe off, your fare is not calculated until your next journey, and is usually considerably more than what the fare would normally have been. The bus driver mentioned it might be $10. It's easily fixable, since you can call the helpline number on the card and get the fare adjusted. Looking at the online records, it turned out I got charged $5, instead of the normal $4.24. I wasn't too worried about that difference, so I left it.

But back to the main story.

A lot of people were leaving the train platform at Central Station. Enough to make me impatient, anyway. So when the four people in front of me all have trouble with their cards swiping at the gates, I start to get a bit pissed off. They all had their cards tucked away in their wallets, and were just swiping their wallets.

The recommended usage is to remove the card from the wallet, and touch it to the reader. With this method, you remove the chance of interference from any other RFID devices you may have in your wallet, like your credit card.

After all, many credit cards and debit cards now have a swipe-to-use function. When you use those, you're careful to only swipe the card you want charged, not just wave your wallet at the device and hope it picks the right one.

So when the lady in front of me also has an issue with hers, and then snipes at the attendant, "Why don't these cards ever work?", I vocalize my displeasure with a retort: "Because you're supposed to remove the card from the wallet before swiping". The attendant indicates agreement, but it's too late; we're already through, with a second swipe from her and a working first-time swipe from me.

"But it always works", she remarks as we start down the crowded stairs. "Evidently not", I reply. My bloody is boiling at that point. The logical inconsistencies of her two previous statements have face rolled by buttons, but it looks like she's about the take the left exit in the underground tunnel, and I'm going to take the lesser used right exit.

"The man before me had his card in his wallet", is the last thing I hear her say as she makes her left turn, and I peal off to my right. The Oyster card I had when I was in the UK worked just fine in a single card holder issued by Oyster themselves. I've seen similar holders issued for Go cards, but didn't happen to get one when I picked my Go card up.

Thinking on the engagement as I walked to the elevators, I found it most odd that I was defending the Go card, a convenient payment mechanism for what is touted to be the third most expensive public transport system in the world.

If you're a Brisbane resident and use public transport, or are trying to weigh up if public transport is worth it, BrizCommuter is recommended reading.

Thursday, July 19, 2012

Integrated Authentication in a Windows Development Environment

Here's a dry topic: integrated authentication in a Windows development environment.

I've spent the better part of a day trying to find a way to do NTLM authentication via an Apache web server running on a Windows server.  The TL;DR is: forget it.  Anything NTLM/Apache related is written for Unix environments, where you have access to Samba and WinBind, or mod_perl and Apache2::AuthenNTLM, or mod_python and PyAuthenNTLM2 (if your Python version is old enough).

In the end, I used IIS7.5 (given that I'm using Windows 7 for my development), but it was still a bit of a pain in the ass to set up.

As you may have gleaned, my regular development environment is PHP and Apache2 (specifically PHP 5.3.14 and Apache 2.2.22 at the time of writing).  However, for this, I require IIS7.5 and a few other toys.

Here are the components to grab.  Some are available via the Microsoft Web Platform Installer, and some are available via the same beast, just not searchable within.

Even though I installed PHP 5.3.X for FastCGI via the Installer, I didn't end up using it.  I switched the handler mapping in IIS to use the same PHP version that I use with Apache, and the same php.ini.  However, it's still good to install the MS version, because it installs other handy things like the PHP Manager for IIS.

With IIS 7.5, the FastCGI module is not installed via the Installer.  It's available as a Windows feature, and can be located via Control Panel > Programs > Programs and Features > Turn Windows features on or off.  From here, you'll want to enable the following:
  • Internet Information Services > World Wide Web Services > 
    • Application Development Features > CGI
    • Security > Windows Authentication
As a basic test for authentication working, here is a sample program, lifted from here:

<?php
if (!isset($_SERVER['REMOTE_USER']) || $_SERVER['REMOTE_USER'] == '') {
    // No authenticated user yet: issue the challenge, offering
    // Negotiate first and NTLM as the fallback.
    header('HTTP/1.1 401 Unauthorized');
    header('WWW-Authenticate: Negotiate', false);
    header('WWW-Authenticate: NTLM', false);
    exit;
} else {
    echo '<p>Identified as ' . $_SERVER['REMOTE_USER'] . '</p>';
}
phpinfo();
?>

The general approach is that the web server has both anonymous access and Windows authentication enabled, and lets the application issue the authentication challenge when it gets to the bits that require authentication.

The first bit is enabling the authentication methods.  In IIS, visit Authentication for the Site.  You should see the different types of authentication you have installed which will include Anonymous Authentication and Windows Authentication at the very least.  If you don't, you may need to restart the IIS service, as well as the IIS UI.

Enable Anonymous Authentication.  IIS7.5 now provides a user called IUSR that can be used for anonymous authentication.  Previously, the anonymous user incorporated the machine name (IUSR_MachineName), which caused problems during application migration.  If you have a specific user you'd rather authenticate as, feel free to use that, or the Application Pool identity.

Note: I'm not very good with IIS, so some of what I'm doing may not be best practice.

Enable Windows Authentication.  In Advanced Settings, I ended up using Accept for Extended Protection and enabled Kernel-mode authentication.  I'm not sure that I actually need Extended Protection, but it's turned on anyway.  In Providers, ensure that it says Negotiate and NTLM, in that order.

Because this is IIS, and it is integrated with Windows, make sure that your anonymous user has read permissions for your application, and write permissions for your log files and cache directories.

Because I'm accessing the application on the same machine that is hosting it, I needed to do a bit of a registry hack, as noted by Method 1 in http://support.microsoft.com/kb/896861.  I actually think Step 1 of Method 1 is a typo, so just start from Step 2.

And that was almost about it.  From then on, when visiting the test site, I was able to specify my username as DOMAIN\username and password, and it would authenticate me.  For extra bonus points, I added the site as a Trusted Site in IE, and it didn't even need to prompt me.  Firefox has a preference that does a similar thing (network.automatic-ntlm-auth.trusted-uris).  Chrome (as of 20.0) apparently uses whatever IE does in a Windows environment, so once the site is trusted in IE, it's trusted in Chrome.  I'm not sure about Unix and Mac users, but Settings > Show advanced settings... > Network > Change proxy settings... might get you closer.  Windows users can then select Security and set up their trusted sites from there.  I'm not sure where Firefox is at with supporting Windows Group Policy these days, but I guess Chrome will use whatever IE uses.
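
For Firefox, that preference lives in about:config and takes a comma-separated list of sites; for example (the hostnames here are made up):

network.automatic-ntlm-auth.trusted-uris = intranet.example.com, devbox.local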

Hopefully, that will get you up and running.

Note: If you are prompted for a password, perhaps because the site is not set up as a trusted site, it would be recommended to host under HTTPS, since, depending on the authentication method negotiated, the password may be sent as plain text.  As yet, I'm not sure if there are any incompatibilities between HTTPS and NTLM, if the site is trusted.


Friday, May 25, 2012

SitePen dgrid From Html and Footers

I'm still fairly new to using Javascript, so what I'm showing today is a way, but perhaps not the best way.

And that way is turning a simple HTML table into something nice using SitePen's dgrid, and adding a footer showing totals.

There are some basics you might want to be familiar with first, like Dojo (1.7.2 at the time of writing), using AMD, and perhaps using the SitePen dgrid.

I was building a screen that has a whole bunch of HTML tables on it. And I thought that the Claro theme looked so nice, but the dojoTabular style was so out of place, that it would be nice to have the data styled like it was in a dgrid. It turns out it wasn't as hard as I thought it might be. dgrid provides an awesome class that will convert the basics of an HTML table to a dgrid, called dgrid/GridFromHtml.

Since this was simple tabled data, and I didn't feel like creating a store for every table, I stored the data as a JSON value in a hidden form field. In my Javascript, I would load that data, create the grid from the HTML, and use renderArray to populate the data.

Here's a small sample of what I did.

This is the gist of the HTML (the field attributes tell dgrid which data property each column maps to), though I've not bothered to populate the hidden element with actual JSON data. Just make it an array of entries, where each entry has properties that match the column headers in the table.

<input type="hidden" id="tableData" value="" />
<table id="tableGrid">
    <caption>My Table</caption>
    <thead>
        <tr>
            <th field="id">Id</th>
            <th field="name">Name</th>
            <th field="value">Value</th>
        </tr>
    </thead>
</table>

For your convenience, a simple function to convert the table to a grid.
// Assumes dom (dojo/dom) and GridFromHtml (dgrid/GridFromHtml) have
// already been loaded via an AMD require.
function loadTable(dataId, id, options) {
    var data = dom.byId(dataId);
    if (data) {
        // The hidden element's value holds the row data as JSON.
        data = JSON.parse(data.value);

        var grid = new GridFromHtml(options, id);
        grid.renderArray(data);

        return grid;
    }
    return null;
}

And when you're ready, call that function.
   var tableGrid = loadTable('tableData', 'tableGrid');

That was fairly painless.

Now for the fun part that I wanted to show off. Adding a footer to display a total. I'll reiterate that this is a way, not the best way.

The basic idea is to prepare an entry similar to the data entries, render a row, and put it in the footer. It uses the same basic formula of putting that data into a hidden HTML element, loading it, parsing it, and rendering it.

I've added a hidden element to store the totals data. Leave the first two columns blank, and populate the total in the value column.

<input type="hidden" id="tableTotalsData" value="" />

<table id="tableGrid">
    <caption>My Table</caption>
    ....
</table>

Now turn on the footer, load the footer, render it, and resize so the grid body can apply the right styles and not overwrite the last row in the grid.

   var tableGrid = loadTable('tableData', 'tableGrid', {showFooter: true});

   var tableTotals = dom.byId('tableTotalsData');
   tableTotals = JSON.parse(tableTotals.value);

   // Render the totals entry as a row, wrap it, and put it into the
   // grid's footer node (put comes from the put-selector module).
   var footer = tableGrid.renderRow(tableTotals, {});
   footer = put('div.dgrid-totals', footer);
   put(tableGrid.footerNode, footer);

   tableGrid.resize();

Now we just need a little CSS to make sure the cells in the footer are aligned with the main content, and take the scroll bar into account.

.dgrid-footer .dgrid-totals {
    /* leave room for the grid body's vertical scroll bar */
    margin-right: 17px;
}

I wrap the rendered footer row in a div.dgrid-totals element, because it may not be the only thing in the footer, especially if you're using pagination instead of on-demand scrolling.

And there you have it.  Simple table data looking sexy with the rest of your Dojo themed site.

Wednesday, May 23, 2012

Saving HABTM for existing records in CakePHP 2.1

This is more of a note for me, since I was doing it wrong, and I couldn't find a clear example in the CakePHP manuals.

The scenario is saving a hasAndBelongsToMany (HABTM) relationship for existing records, and requiring a uniqueness constraint on the join table.  The solution is to use saveAssociated(). The array to pass as the data should look something like this.

$data = array(
    'Model' => array('id' => 1),
    'AssociatedModel' => array(
        array('associated_model_id' => 1),
        array('associated_model_id' => 2)
    )
);

The AssociatedModel is actually the name of the hasAndBelongsToMany relationship found on the Model model, and will represent the actual join table, which will probably be called models_associated_models.
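
For completeness, the save itself is then just this (assuming you're inside a controller with Model loaded):

$this->Model->saveAssociated($data);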

Friday, April 20, 2012

You Never Go Full Retard


To paraphrase a line from Tropic Thunder, I've gone full retard with my usage of Dojo widgets.

It's not retarded, though. It's one of the proper usages.

Dojo seems to have two ways of doing their widgets. The first, and original, way is by putting markup in the HTML to denote the type and any properties. This is parsed at runtime.

The second way is complete programmatic insertion. Your HTML is a series of div tags with ids, and everything else is created and inserted with Javascript.
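
For illustration (using dijit/form/Select as a stand-in; the ids, names and options are made up), the declarative way looks like this:

<select id="myselect" name="colour" data-dojo-type="dijit/form/Select">
    <option value="red">Red</option>
    <option value="blue">Blue</option>
</select>

while the programmatic way starts from a bare node and builds the widget in Javascript:

require(["dijit/form/Select", "dojo/domReady!"], function(Select) {
    // Replaces the node with id "myselect" with the widget's markup.
    var select = new Select({
        name: "colour",
        options: [
            {label: "Red", value: "red"},
            {label: "Blue", value: "blue"}
        ]
    }, "myselect");
    select.startup();
});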

Up until now, I've been doing a combo of the two. I'd rather generate regular HTML from my back-end framework (CakePHP), and then, in the Javascript, identify elements to become Dojo widgets by id. The trouble with that approach is that Dojo is unprepared for it, and doesn't replicate some of the properties of the native element (like name) onto the newly created Dijit HTML.

It'll only be a matter of time before I get sick of adding data-dojo-type attributes to my CakePHP form elements, and create a DojoFormHelper to default them for particular types of elements.

Wednesday, February 1, 2012

Censored in a country near you

Censorship, social media, tailored search content, and privacy policies shakier than an opposition's family planning policy.

It seems like a large bubble is moving across the land again, and this time it's the issue of local law enforcement controlling social media content. Last time it was the Occupy Wall Street movement.

For me, this bubble started growing sometime last year (or was it the year before?) with Stephen Conroy and the great Australian firewall: a proposed change for Australian ISPs, requiring that they block a specific blacklist of URLs, with very little transparency about what that blacklist actually was.

Then it morphed a little into an unwillingness by some Australian state attorneys-general to recognise an R18+ rating for games. You'd think Australians would be able to call a spade a spade. But not if it's a spade of particular quality. Manufacturers have to remove a little of the shine, then push it to an MA15+ market, all the while uneducated, gormless parents are buying these for little Johnny, who can't be much more than 13. Oh dear, I think I've blurred my analogy. Anyway, I think you get the idea. A game that would otherwise be classed as R18+ has a few graphics touched up, and ends up in the hands of a 13 year old, put there by the parents who should know better. I think it would be a different result if the game was actually recognised and labelled as R18+. But I digress.

More recently, we've had SOPA/PIPA. While they're not about censorship, they are about going the wrong way about controlling information on the internet. Some of the ideas put forward about blocking which sites should and should not be available to the American viewing public smell a lot like some of the ideas that Conroy had for the great Australian firewall. And if either one of those causes results in an implementation that blocks URLs or whole sites with little review or recourse, you can be sure there will be a justification for the other cause soon enough.

Then we have Twitter, once free of censorship, now adjusting policy to cater for local law, and telling us that some censorship is better than no censorship. Well, except for the people actually getting censored, or the intended audience not being reached because of censorship by local law enforcement in that country. The small blessing is that, for those who care, there's an independent site listing the requested takedown notices. The details of that escape me, somewhat. But I'm not overly put out by the Twitter stuff. While I have an account, I go through phases of usage, and in more recent times, that usage is way less than Facebook. And Facebook gets a 5 minute look-in, perhaps once every two weeks.

I couldn't tell you what's going on with Facebook. Since their more recent UI change, doing what's hot, what's popular and, apparently, what's relevant, I get the feeling I'm not seeing all that I should be seeing, and no longer trust Facebook to deliver posts from friends. I know there's a similar thing going on with Google and their searches in general, but since most of my searching is for programming syntax and documentation, I welcome relevant hits on StackOverflow any day of the week.

Then, the latest act is Google's, in particular Blogspot and its usage of TLDs. Once upon a time (say, last week), this blog was only found through reuben-in-rl.blogspot.com. Now, the TLD will change depending on which country you are viewing from. You can read a bit more on it here. So Americans and their aliens will still see .com, Australians will see .com.au, and people viewing from other nations will see whatever TLD Blogspot has managed to secure for that country.

So why did they do that? So local law enforcement can block content without impacting what content gets seen from other countries. Pretty similar to what Twitter are doing.

There are a couple of interesting side effects.

The first is a loss of page rank for any engine that just looks at the URL. Any Tweet counts, Facebook Likes and Alexa rankings just got wiped for any country outside the US. I'm tempted to put Google's +1 in that basket as well, but Blogspot supplies the canonical relationship in the header of all the pages, which points back to the .com version. I wonder if a +1 gets applied to the viewed URL or to the canonical URL. Usage of canonical leads me to the second point.

The second side effect is the usage of the canonical relationship in the header that points back to the .com version. This means, for most search engines that honor the canonical reference, only the .com version of the website will get indexed. And if local law enforcement in the US decides your content isn't fit for viewing there, then you're pretty much fucked for having it viewed anywhere else. I guess this was always the case in the old system, but it's probably the one inconsistency with the TLD change policy. Your content is good for indexing, until you piss off the US.
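
For reference, that canonical relationship is just a link element in the page head, pointing at the .com URL (path elided):

<link rel="canonical" href="http://reuben-in-rl.blogspot.com/..." />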

So, my original thought that sparked this entry was "how long until distributed peer-to-peer technologies are used to disseminate blog content into the ether?". Maybe your audience reaches a critical mass, and instead of having a hosted blog, you just publish an atom feed to a few well-known locations and assorted technologies (RSS, Usenet, Github, Wordpress, Craigslist, Gumtree, IRC logs, free CDNs and a whole bunch of torrent trackers that peddle plain text and a handful of supporting images). Consider what happened when attempts were made to shut down WikiLeaks.

It's not a small world, after all. There are just more assholes cramping your style.

Update: Google will look at the canonical link, if you don't specify an href. Even then, you might read this post about how that's not quite good enough, but it seems Google have updated processing so that even if you +1 an explicit URL, if it contains a canonical link, that will increment the counters for other +1 buttons with different URLs but the same canonical link. TL;DR: the Blogspot TLD change won't bork your +1's.

Monday, January 30, 2012

Refreshing a Dojo DataGrid

Here's a little trick for refreshing the contents of a Dojo DataGrid to the currently selected position. You may wish to do this if your data source is being updated by something else, and you need to force a refresh.

dojo.require("dojo.aspect");
var grid = dijit.byId("myGrid");
grid.store.close();
var handle = dojo.aspect.after(grid, "_onFetchComplete", function() {
    handle.remove();
    this.scrollToRow(this.selection.selectedIndex);
});
grid.sort();

This simplified example closes the store and calls sort() to force a refresh of the data.

In the normal course of processing, Dojo tries to get you back to where you were, but due to the clearing of the data, the scroller height is reduced, and when Dojo tries to set the scrollTop property of the grid div, it remains at its reset value of 0.  Therefore, we use aspect.after() to set the row once the fetch triggered by the sort has completed.

We don't need this happening every time data is fetched for the grid, so we record the aspect handle, and force the aspect function to remove itself from the chain, once it has been called.  Since the scrollToRow call is likely to fetch more data, we remove the aspect handle before calling it, so we don't have the aspect function called twice.

This was done using Dojo 1.7.1.
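
For what it's worth, the same thing in 1.7's AMD style would look roughly like this (a sketch along the same lines, untested):

require(["dojo/aspect", "dijit/registry"], function(aspect, registry) {
    var grid = registry.byId("myGrid");
    grid.store.close();
    var handle = aspect.after(grid, "_onFetchComplete", function() {
        handle.remove();
        grid.scrollToRow(grid.selection.selectedIndex);
    });
    grid.sort();
});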

Friday, January 27, 2012

Fixed layouts for tables

I'm still fiddling around with Dojo grids. It's the Enhanced Grid in 1.7.1 at the moment, but most of the CSS comes from DataGrid anyway.

I came across an interesting difference between Chrome, Firefox and IE with regards to how Dojo implement their grids.

To get grids to render quickly, Dojo like you to specify the column widths in the grid structure. This means they don't have to run any tricky rendering calculations, and their arrangement of tables nested in div tags works out nice and quick.

However, Chrome and Firefox have different ideas on how to render what Dojo has done to make use of this quick rendering.

Dojo does the following: makes each table have a table-layout of fixed, gives the table a width of 0, and explicitly specifies the width of each column in the TH and TD tags. That width is the width you specified in the grid structure.

Based on this, I expect the widths I've supplied to be the total widths of the columns. I also use the sum of these widths, plus a bit more for the vertical scroll bar for the node holding the grid.

Chrome does the following: pretty much as expected, from a "setting up Dojo" point of view. Each column is as wide as I configured. However, from the CSS point of view, it's a bit strange. Dojo puts padding in the cells (5px each side) and there's a border as well (1px all around). So when you look at the Metrics tab in the developer tools, the actual width displayed is less than what you put in. The box-sizing of the TH and TD elements is content-box. It looks like Chrome has reverse engineered the supplied width to fit the content-box model. My rationalisation is that Chrome forces the TH and TD elements to be box-sizing: border-box, given a table-layout: fixed, but instead of just saying that, it changes the width to suit box-sizing: content-box. Well, it all looks good in Chrome, so what do I care?

A lot, because my clients aren't using Chrome. They're using Firefox and IE.

Firefox does the following: Completely ignores the width on the table (which is 0), in favour of the widths on the column headings. And then proceeds to render them using box-sizing: content-box. This means all the "exact" column widths I asked for are now increased by the padding and the borders in the column headings.

The specification for table-layout: fixed at the W3C is particularly vague when it comes to determining what part column heading widths, and their padding, should play in determining the total width of the table.

On one hand, Chrome seem to have taken their lead from the second paragraph, and used the block width algorithm to determine that a supplied width should be applied as though box-sizing: border-box were in effect. Even then, that's not quite right, because border-box doesn't include margins, whereas the block width does. Lucky for us, TH and TD elements lack a margin to speak of.

On the other hand, Firefox have taken their lead from the first rule of the fixed table layout algorithm, and just use the width property as the width according to the box-sizing: content-box model. And why wouldn't they: it's what it says on the tin.

The workaround, to get consistent behaviour across both browsers, is to force the column headings to box-sizing: border-box.

Since I'm using Compass/SASS, I can create a mixin to include at the top level of any Dojo grid to fix the problem.

@import "compass/css3";

@mixin dojo-grid {

    .dojoGridRowTable > tbody > tr > th,
    .dojoGridRowTable > tbody > tr > td {
        @include box-sizing(border-box);
    }
}
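
If you're not using Compass/SASS, the CSS that mixin generates amounts to roughly this:

.dojoGridRowTable > tbody > tr > th,
.dojoGridRowTable > tbody > tr > td {
    -moz-box-sizing: border-box;
    -webkit-box-sizing: border-box;
    box-sizing: border-box;
}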


Luckily, IE9 plays along as well. I'm not sure about IE8, and I care even less. Google can get regular updates out for Chrome, regardless of the platform. Firefox is doing its best to follow suit. I'm inclined not to care much at all for IE if the only way for HTML and CSS bug fixes to be released is with the next major version of the product (or the platform it was designed for).