Project Death March
For this month’s #tsql2sday, Jeff Mlakar asked people to share a story about a project they worked on, or were impacted by, that went horribly wrong. (Jeff’s Invitation)
One story stands out from early in my career, though it wasn’t the obvious choice for this topic: it was the most work-related fun I’d had at the time, and it actually ended up solidifying my career choice and my passion for problem solving.
I’ll set the scene…
I’m working in an MI development team of four SQL/VBA developers, and I’ve only been using SQL for a little over a year, so I’m still very much a novice. At this point the main (read: only) database platform in the business is Microsoft Access, or to be precise, around 20 Access databases, most of them regularly hitting the 2 GB file size limit, so my VBA skills were probably far beyond my SQL at that point.
The business has just undergone a sort of spin-off from its parent company, so there’s a lot of change afoot: new desktops being rolled out across the call centre floor, consultants in to help migrate data to new infrastructure, and so on. There was a lot going on, and we were firmly on the user side of this particular fence. Our team had really inherited these Access databases through a need to report on the data within them, so we were never a priority or part of the migration planning beyond a requirement to document our “estate”. It became obvious early in the project, with months still to go, that no one really understood how critical these unstable VBA application/databases were, or, more importantly, the sheer number of them. The business relied heavily on at least three of them to record sales leads, prevent fraud and process bulk account corrections. Downtime was not an option. We knew the migration team wasn’t scoping out the sheer volume and complexity of these Access databases, but we were told time and time again that we just had to prep them and the rest would be handled by the project team!
Cue the go-live weekend…
It’s Friday afternoon, we’ve kicked all the users out of the databases, and they’re ready to go. We’ve done our bit: prepared the databases and their related processes, and notified their users. We’ve got a list of file path changes to make on Monday morning, but nothing serious. Job done, right?
As 5pm approaches there’s a lot of movement in the meeting room set aside for the migration team, and word starts leaking out that no one thought to check data transfer times: the estimated duration was spilling into the following Tuesday! This would impact several of our business-critical (yes, I know!) Access databases and the processes that hang off them to prepare and distribute data. It was a show-stopper!
This was the point where we cracked out the whiteboard and started putting together a plan to get around the problem and make sure the key teams that relied on these enormous, unstable databases had them on Monday morning. It involved some temporary storage, a logon script to re-point desktop shortcuts, and some complex re-merging of data once the databases were migrated successfully. We stayed until after 10pm, through pizza, coffee and plenty of failed ideas, to get the plan put together and tested. It was so much fun, working through the different options and talking through roadblocks as a team of peers.
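For the curious, the shortcut re-pointing piece was nothing clever: a small VBScript logon script along these lines. This is a sketch from memory, with made-up server names and paths, but the heavy lifting is just WScript.Shell’s CreateShortcut and the FileSystemObject.

' Sketch of the logon script, with hypothetical UNC paths.
' It walks the user's desktop and re-points any shortcut that still
' targets the old file server at the temporary weekend location.
Option Explicit

Const OLD_ROOT = "\\oldserver\midata\"   ' assumption: original file share
Const NEW_ROOT = "\\tempserver\midata\"  ' assumption: temporary location

Dim shell, fso, desktop, file, shortcut
Set shell = CreateObject("WScript.Shell")
Set fso   = CreateObject("Scripting.FileSystemObject")

desktop = shell.SpecialFolders("Desktop")

For Each file In fso.GetFolder(desktop).Files
    If LCase(fso.GetExtensionName(file.Path)) = "lnk" Then
        Set shortcut = shell.CreateShortcut(file.Path)
        ' Only touch shortcuts pointing at the old server
        If InStr(1, shortcut.TargetPath, OLD_ROOT, vbTextCompare) = 1 Then
            shortcut.TargetPath = Replace(shortcut.TargetPath, OLD_ROOT, NEW_ROOT, 1, 1, vbTextCompare)
            shortcut.Save
        End If
    End If
Next

Run at logon, it meant users clicked the same icons on Monday morning and landed in the temporary copies without ever knowing the difference; the same script ran again once the real migration finished.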
The best part was that we took this plan to the project manager, who really didn’t know who we were, and ended up solving the problem. The bulk of the data being migrated was low priority, so it could be moved over the following days while we got the business-critical databases across first, re-pointing the less critical ones until they could be migrated the following weekend. It was a messy solution, but we came up with it ourselves and had fun stepping up.
I don’t wish for these scenarios, but I really enjoy getting stuck into a good problem with a looming deadline.