Tuesday, August 19, 2014

The Inevitable

I received this little gem attached to my toll statement.

In the history of man-made disasters, things tend to become inevitable if you don't actually do anything about them.

Queensland Government 2014: Doing nothing about improving public transport, and letting the congestion happen.

Friday, June 6, 2014

Source controlling your database, with VS2012 and SSDT

Source control for code is awesome, but when it comes to databases, I've never done it fully.  Sure, there are dumps to SQL files, and extracts, but it all seems just a bit too manual.

So, I've decided to try my luck at a proper database project for my MS SQL database in Visual Studio 2012.  The MS SQL Server flavour is 2008 R2.

The current situation is 3 databases that I need to apply changes to, at various times.

My first database is the development database, located on a shared server, and connected via Windows Authentication.

The second database is the test database, located on the test server. Whilst I can connect to the machine via a VPN, the database still has Windows Authentication, and physically connecting to the database from my development machine is not an option.

And the third database is the production database. Also using Windows Authentication, and unreachable from my desktop, in any database connectivity sense.

At the time of deciding to have this database project, I was near the end of the development cycle, so pretty much all database changes I wanted to make were already applied.  Though in planning for a new release, I always take a copy of the database from the previous release and rename it for the new release.  So I had an older version to work with.

So I created a new SQL Server Database Project.  Then I imported from the older database.  At this point, I committed the project source to Git, to show the state of the database before all my current release changes were applied.

Then I performed a Schema Compare with the new release, making sure to set the new database as the source, and the project as the target.  I then did an Update, and committed that to Git as well.  Nothing unusual so far.

The next challenge is to work out what changes are required to upgrade the test database.  Since the database project also imports security objects, such as logins, I want to make sure none of my development logins make it to the test and production databases.  The database project tools and doco seem geared toward when you can connect to the database to do a Publish, but that's not an option in this case.

The next best thing is to jump on the test system, generate a .dacpac file from the test database, and bring that back to my development machine.  From there, I can (hopefully) perform a Schema Compare and generate an SQL file that I can run on the test server to get it up to speed.

But no.  That would seem not to be an option for me at the moment.  When I generate a dacpac file from SSMS 2008 R2 and try to use it in a Schema Compare, I get an Origin.xml file missing error. This post suggests that MS might fix the issue, but also says that you need to upgrade to SSMS 2012 to be able to generate v3 of the dacpac, the only version that will work as intended.

Unfortunately, my test system is only running 2008 R2 on Windows Server 2008 Datacenter, and SSMS 2012 requires Windows Server 2008 Datacenter SP1.  On the original post, I have quizzed the poster of the suggested fix to see whether they were going to fix SSDT to handle older dacpacs that don't have an Origin.xml, or take the less convenient option of forcing everyone to patch their OS before being able to install the latest SSMS.

So, for now, I will just do a Schema Compare between the older release database and the current database, and generate an SQL file that can be run on the test and production servers.  In my manually maintained SQL upgrade scripts, I have some ALTER statements that introduce NOT NULL fields with no default, along with code to populate those fields.  I think I may have to run my manual scripts anyway, but at least I can extract the stored procedure code from the generated SQL to upgrade all of those, instead of manually visiting each one, doing a modify, copy and paste, and making sure permissions are appropriately applied.

Thursday, May 8, 2014

Moving House Sucks

Sometime at the start of 2013, we decided that the house we lived in was no longer suiting our needs, and that we would need to find a bigger place.  So we started fixing the house up to prepare it for sale.

In mid 2013, we still had quite a lot of work to complete on the house, before we felt it would be ready to put on the market and fetch the best price.  My wife also entered the work force, but was working on the other side of the city.  Driving late at night, on the highway, with the young'un was inviting disaster.

So after a year of work on the house, including painting and landscaping, we were finally ready to sell.  All the stuff we really didn't need for a while went into portable storage.  In December 2013, the house went on sale, and a good offer was made from that first weekend.  So we took it, and settled in mid January 2014.

Then the hunt was on to find a place on the north side.  When we settled, we moved. We moved to the in-laws'.  Not ideal, but it was better than finding a short rental.

After a few months of searching, we finally found a place at the right price.  The house was actually one of the first places we saw in January, but the price was out of our league.  We had noticed that the estate agents were putting the prices at $50K over what we'd otherwise consider.  How could we tell? Because we'd look at a place, and after two or three weeks on the market, it would drop by $50K.  Same, too, with the place we ended up getting.  It actually dropped $65K before getting to a price where we could put an offer on it.  Then after negotiations, it worked out that the price was $50K below the original asking price.

Anyway, here we are.  All moved in, but still moving in.  I probably won't consider us fully moved in until there are no more boxes to be unpacked, and both cars can fit in the double garage.  Once that's done, we can also finish painting the inside of the house.  It's just gloss enamel skirtings and a top coat for the tiled area, but it's in the "too hard" basket at the moment, especially with the dog and toddler, who are unlikely to let me get on with it while they're awake.  At least I can put one of them outside or in the sleeping crate.  What to do with the dog? What to do, what to do...

Over the course of the last year, with the house selling, the house buying and the raising of the toddler, this blog has been neglected.  Yep, it's kind of one of those posts.  Not the post where I promise to blog more frequently, just the "hey, still here doing stuff" kind.

We also had a little family loss last year.  Our cockatiel, Missy Moo, or Moo Moo, as we would call her, developed a heart condition that was severely impacting her quality of life.  We had to put her down. It was quite sad. Addison still remembers Moo Moo, and has taken a special liking to the cockatiel at her day care.  She's one of the few, if not the only, toddler that will let the cockatiel sit on her shoulder while she walks around and pets it.

We also had a bit of a scare with the dog soon after.  A few scans and a very expensive vet bill later, the cardiologist determined that Astro had a diaphragm flutter that would present itself in situations of stress.  And that stress was the stress of moving, and living with in laws.  Now he has a doggy door, so he can let himself in and out as he pleases, and he hasn't vomited for quite some time.

So, that's been the year. Back later.

Monday, October 28, 2013

Application Components

It's been a busy year with our little girl, Addison, getting the house ready to put on the market and work projects.  Lo and behold, almost a year since my last blog.
This entry, more of an entry for me than anything else, is a record of components used for my last web application.  The application is a single page application used for searching call data records that have been imported from an archive of voice recordings and accompanying CDRs.
The components used in this project are:

For this project, the client has elected to deploy on IIS on a Windows Server 2008 R2 machine, but I had it running with Apache 2.4.6 in my dev environment, and trialed Microsoft WebMatrix for attempting to deploy a PHP project to my local IIS installation.  Unfortunately, WebMatrix won't deploy to localhost, and I couldn't get it to deploy to the named instance via a specific IP address.  That's too bad, because I normally have no issues deploying C# projects via Web Deploy to my local IIS installation.  But I could run the site directly, using IIS Express, so at least I could test the URL Rewrite rules in IIS.
I decided to go with SQLite, because there are only two tables in the application, and I didn't want to have to set up MySQL or something heavier on the production machine.  The only downside with SQLite is finding a client GUI.  There were instructions for setting up Eclipse with a JDBC driver or a plugin, but the plugin was no longer in active development and seemed geared toward end-user Dropbox stores, and the JDBC drivers were old, and required JNI and compilation.  In the end, I used a Firefox addon for viewing and manipulating the database.
I did have goals of integrating jQuery Mobile, but there was a conflict with the Timepicker addon.  JQM and JQUI both have slider components that occupy the same namespace.  Unfortunately, they also work slightly differently.  The JQUI slider provides a generic slider widget, whereas the JQM slider gets applied to an input text field, and then provides a widget beside that input box. I was getting too much benefit from the Timepicker addon to dump it in favour of JQM, which doesn't offer a native integrated date and time picker.  According to the jQuery UI Roadmap, there will be integration of the jQuery UI slider with jQuery Mobile in jQuery UI 1.12.
Ever keen to minimize the number of components used in a project, I also had a look around for a jQuery UI native grid. In Feb 2011, the jQuery UI team announced they would be working on a Grid component for jQuery UI. In Oct 2011, there was another post, with an update on what was happening.  This post announced that the road map would see the grid released with jQuery UI 2.1.  jQuery UI 2.1 isn't actually on the roadmap yet, but I'm going to hazard a guess that it's going to be another 2 or 3 years at least before it sees the light of day.  This is based on the development of the jQuery UI Selectmenu component, which has been in development since at least Feb 2010, and will be released with jQuery UI 1.11 (currently in beta).
In the meantime, I would need to use something else.  Since I had elected to go with jQuery, it only makes sense that I stick to something jQuery based.  I've already been bitten by using the Dojo Enhanced Grid (experimental) in Dojo 1.7.4, only to have it dropped, and the prospect of having to upgrade to 1.8 was too scary to contemplate. Especially when the newer Grid component did not support all the functionality that the Enhanced Grid did (looking at you, multi-row-per-entry feature).
I've decided to try DataTables.  So far, so good.  Because it's being used to display search results, rather than a report, I'm electing to count the total rows in a search, but only return the first 500, and suggest to the user that they refine their search.  This way, I get to make use of the client side search and sorting capabilities, without having to implement ajax calls for dynamic loading.
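The capping approach described above can be sketched as a small helper, independent of DataTables itself. This is only an illustration; the function name and message wording are hypothetical, not from the actual application:

```javascript
// Cap a search result set: report the true total, but hand the grid only
// the first `limit` rows, with a hint to refine the search when truncated.
function capSearchResults(rows, limit) {
  const truncated = rows.length > limit;
  return {
    rows: rows.slice(0, limit),       // what actually gets displayed
    total: rows.length,               // the full match count
    message: truncated
      ? 'Showing first ' + limit + ' of ' + rows.length +
        ' results; please refine your search.'
      : null,
  };
}
```

The capped `rows` would then feed a plain client-side DataTables initialisation, keeping its built-in search and sorting without any ajax wiring for dynamic loading.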
The application has only been written for a desktop browser, but it would be nice to provide optimizations for tablet and mobile support.  This would require a little more investigation into the Unsemantic CSS Framework and its responsive design capabilities, and the prospect of having reflow for DataTables (for mobile views). I have a feeling that the latter may have to wait until sliders are no longer an issue, and to see if DataTables and JQM reflow tables can play nicely, or waiting for the JQUI Grid, which might have JQM reflow features when it gets released.
One of the features of the application was to download the archived audio file associated with a search result.  This audio file is stored in the original TAR file that was indexed.  To download it nicely, within the context of the CakePHP framework, I had to use PharData to extract the audio file from the TAR and pass it to CakeResponse::file().  I also wanted to perform some sort of clean up of the extracted file, but this could not be done on a case by case basis by the same process that served the file. I had one suggestion from a forum to detect when the download was completed, and fire an Ajax call on completion to delete the file.  Giving the client an API to delete files like this (even if it was obfuscated via an id) didn't sit well with me, mostly because relying on the client to issue a cleanup command on the server was too much of a gamble.  So I elected to have a serving process delete all extracted files older than a certain age.  My starting age is 5 minutes, though this could probably come down to 3 minutes or less, given that it shouldn't take more than that to download the audio files on a local network.
I hope there was some interesting information in there.  I'll be looking forward to the development, and documentation of my next application, to see the progression of web application components used.

Monday, October 29, 2012

jQuery UI selectmenu and the change event

Many moons ago, the Filament Group created a jQuery UI component called Selectmenu.

Then Felix Nagel forked it, and made improvements!

If you're using jQuery UI 1.8.X or older, look at the links from Felix Nagel's forked version.  If you're using jQuery UI 1.10.X, it should be available officially within jQuery UI.  However, at the time of writing, jQuery UI 1.9 is what is in the wild, so you have to cherry pick from Github, and not quite get the full benefit of Themeroller.

Selectmenu has a few ways of doing things that are a bit counter intuitive to the way you'd expect things to be done.  This is mostly because it is made up of existing components.

At some point, to set the widget to an existing value in the select list, you needed to do this

$("#myselect").selectmenu("value", value);

However, these days you can get away with

$("#myselect").val(value);
And if you've altered the options on the select list, you may also want to follow up with

$("#myselect").selectmenu("refresh");
One of the counter intuitive actions about the component is handling of the change event.  With a native select element, you can do any of the following, depending on your jQuery version and preference:

$("#myselect").change(function() {});
$("#myselect").bind("change", function() {});
$("#myselect").on("change", function() {});

But given the Selectmenu component, as it stands, you need to do this:

$("#myselect").selectmenu({change: function() {}});

But this doesn't allow for event propagation.  So here's what I've started doing to allow event propagation, whenever I set up a Selectmenu instance.

var changeSelectMenu = function(event, item) {
    $(this).trigger('change', item);
};

$("#myselect").selectmenu({change: changeSelectMenu});
And now I can use any of the previous methods to implement change handling, and have multiple handlers for the event.

Caveat: I'm a scoping noob. Usage of $(this).trigger() may be bad. My intention is that it should target the element that the component is attached to.  Perhaps I should even use the event to get the event name, rather than hard-coding 'change', and then I could call the function variable propagateEvent.  Maybe there is even a better way to propagate the original event, rather than starting a new one.

I would rather that the component set this behaviour up by default, but there's probably something intrinsically wrong about it that made it seem like a bad idea.

Monday, September 17, 2012

Bye Bye, IE8

On one of my rare visits to Twitter, I did spot a fantastic bit of news.  With the release of IE10 on 26/10/2012, Google Apps will be dropping support for IE8 from 15/11/2012.

So where does that leave the various operating systems that run a version of Internet Explorer?

For Windows XP SP3, support for your operating system ends on April 8, 2014.  If you were running the 64 bit version of Windows XP, then the latest release was SP2, and support for that ended on July 13, 2010.  You can't upgrade to IE9, so your best bet for a modern standards compliant browser is Chrome or Firefox.

For Windows Vista, it's a bit trickier.  Current documentation generally states April 11, 2012.  But support for SP2 ends 24 months after the release or at the end of the product's support lifecycle, whichever comes first.  Considering that SP2 was released on April 29, 2009, the support should have ended April 29, 2011. Either way, you can still use IE9 to get some HTML5 functionality, but if you're going to hold on to your Vista release for as long as some folks are holding on to their XP releases, you also might consider installing Chrome or Firefox.

For Windows 7, mainstream support is expected until January 13, 2015 and extended support ends January 14, 2020.  You'll also be able to upgrade to IE10, but frankly, why should you have to wait until a major release of a browser that is really aimed at another operating system before you can get access to the features of a modern standards compliant browser?  That's right, get Chrome or Firefox anyway.

For Windows 8 users, I guess you're running the Release Preview at the moment, and will be able to get the full retail version on October 26, 2012, the same day that IE10 is released.  If you're sticking to IE, then you can expect two major upgrades to your web browser with the next two releases of Windows at approximately 3 year intervals. Or, you can start using Chrome or Firefox now, enable the sync feature and have extensions install automatically to your Windows 8 installation of the browser.

I haven't covered older releases of operating systems or servers here, but things are getting pretty grim if you're still running Windows 2000, or Windows Server 2003.  I don't think Chrome is supported for those operating systems, and details for the latest release of Firefox are hard to find. Firefox 14 is supported for Windows Server 2003, but it's not the latest release at the time of writing; the link provided is for Firefox 14.

Personally, I'm sticking with Chrome.  It's proven to be pretty good for my development needs, but I always keep an installation of IE and Firefox around.  I should also keep an install of Safari for Windows around as well, for times when I want to run different logins to the same development site at the same time, and maintain the consistency of WebKit rendering.  I'll probably upgrade my IE9 install to IE10, and hope that it maintains the render-as features that IE9 does (or I'll check someone else's IE10 install first).
Update: Went looking for Safari for Windows, but it seems that Apple pulled support and all but one link for Safari 5.1.7 for Windows. Maybe I'll use Opera instead.  When I need to.

Friday, July 20, 2012

In defense of the Go card

In Brisbane, the local transport authority has issued a card that allows people to use the public transport system without having to tender cash at stations and bus stops, and allows swift entry and exits from train stations, buses and ferries without the need for a manual ticket inspection. It's been around for a few years.

In Brisbane, it's called a Go card. In London, it's called an Oyster card. Other places around the world have probably adopted something similar.

The usage is fairly simple. When you arrive at a train station, or board a bus or ferry, you touch the card against a round disc called a Go card reader. It tells you how much money you have left on the card. Before you leave a train station, or leave a bus or ferry, you touch the card again. It will tell you how much the fare was, and how much money you have left on the card. With the bus system, you have to wait until the onboard computer is sync'd with the central system before you can see your journeys online.

The tech isn't without its faults.  Typically, I catch the train into work, but a few weeks ago I decided to catch the bus.  The bus is closer to my house, but counts for three zones of travel, which attracts a price of $4.24 each way. When I catch the train, I drive for 15 minutes or so, and catch a train that is only two zones and costs $3.58.

Anyway, one morning on the way to work, the ticketing computer on the bus decided to continually reboot. This meant that anyone already on the bus was unable to swipe off on the way off the bus, and anyone getting on would be traveling for free. The side effect is, if you don't swipe off, your fare is not calculated until your next journey, and is usually considerably more than what the fare normally would have been. The bus driver mentioned it might be $10. It's easily fixable, since you can call up the helpline on the card, and get the fare adjusted. Looking at the online records, it turned out I got charged $5, instead of the normal $4.24. I wasn't too worried about that difference, so I left it.

But back to the main story.

There were a lot of people leaving the train platform at Central Station. Enough to make me impatient, anyway. So when the four people in front of me all have trouble with their cards swiping at the gates, I start to get a bit pissed off. They all had their cards tucked away in their wallets, and were just swiping their wallets.

The recommended usage is to remove the card from the wallet, and touch it to the reader. With this method, you remove the chance for interference from any other RFID devices you may have in your wallet, like your credit card.

After all, many credit cards and debit cards now have a swipe-to-use function. When you use those, you're careful to only swipe the card you want charged, not just wave your wallet at the device and hope it picks the right one.

So when the lady in front of me also has an issue with hers, and then snipes at the attendant, "Why don't these cards ever work?", I vocalize my displeasure with a retort, "Because you're supposed to remove the card from the wallet before swiping". The attendant indicates an agreement, but it's too late, we're already through with a second swipe from her, and a working first time swipe from me.

"But it always works", she remarks as we start down the crowded stairs. "Evidently not", I reply. My blood is boiling at that point. The logical inconsistencies of her two previous statements have face-rolled my buttons, but it looks like she's about to take the left exit in the underground tunnel, and I'm going to take the lesser used right exit.

"The man before me had his card in his wallet", is the last thing I hear her say as she makes her left turn, and I peel off to my right. The Oyster card I had when I was in the UK worked just fine in a single card holder issued by Oyster themselves. I've seen similar holders issued for Go cards, but didn't happen to get one when I picked my Go card up.

Thinking on the engagement as I walked to the elevators, I found it most odd that I was defending the Go card, a convenient payment mechanism for what is touted to be the third most expensive public transport system in the world.

If you're a Brisbane resident and use public transport, or are trying to weigh up if public transport is worth it, BrizCommuter is recommended reading.