Upcoming Presentations

Just a quick note to let you know about a few upcoming presentations I’ll be doing. This week I’ll be presenting remotely for the Washington DC PASS User Group, PASSDC (event link). I’ll be presenting “Do More With Less: Consolidate and Virtualize”. Since all the stuff with PRISM came up recently, and I’m presenting for a group in DC, I think I’ll be modifying my deck to have some “fun” with it. 🙂

Next week I’ll also be presenting in person for the Tampa (Pinellas) SQL User Group (event link), where I’ll give my “What is a BI DBA?” talk.

Also, I’m proud to say I’ll be presenting at this year’s PASS Summit conference in Charlotte! I’m honored to have secured a Spotlight (90-minute) session for the conference, where I’ll be doing an extended version of my “What is a BI DBA?” presentation.

Internet Security Advice For The Family

Alastor Moody says
Constant Vigilance!

For today’s post I’m channeling my inner Brian Kelley (Blog | Twitter) and talking about security, in particular Internet security. Today’s post is actually an email exchange between my younger sister and me. She asked me for some general advice, and I wrote a long response and figured I’d share it with you readers (which you can in turn send to your own family members seeking similar advice).

Continue reading “Internet Security Advice For The Family”

Business Objects on Linux and SQL Server

This is just a quick post to share a lesson learned while I was on an engagement where the client’s reporting environment was using SAP Business Objects (BO) running on Linux for reporting. We were doing a test to move the underlying data warehouse from another database platform to SQL Server 2008 R2.

As we changed connections over, however, we quickly ran into a roadblock. When we tried to make a connection to SQL Server via BO, we got the error ‘Unable to bind to Configuration Objects (WIS 10901)’. What made this situation strange is that you could connect from a Windows box, but Linux itself wasn’t having any of it. After some digging around we found we needed a third-party ODBC driver to make the connection work.

The folks at SAP suggested we use a third-party driver for ODBC connectivity. We were pointed to drivers by DataDirect (NOTE: this is not an endorsement of said product; it’s simply the solution we tried and went with. There are several third-party vendors that offer Linux ODBC connectivity, so please evaluate and choose what works for your environment). Have you run into this issue before? How’d you handle it? Feel free to share your solutions in the comments.

Speaking at PASS Summit 2013!

I’m happy to officially announce that I’ll be presenting at this year’s PASS Summit in Charlotte, NC! I’ll be presenting my talk “What is a BI DBA?” as a Spotlight Session (90 minutes). This is the second time I’ll have presented at the Summit and I’m honored and beyond excited to have a Spotlight Session so we can cover more material!

This year the conference runs from October 15-18. I hope to see you at the Summit this year! http://www.sqlpass.org/summit/2013/

Magnify SQL Text with SSMS 2012

This is just a quick tip for folks who present SQL code at events such as SQL Saturday. Most presenters use tools like ZoomIt (which, if you present, please please learn to use; it’s a wonderful, free tool), but it can get nauseating for attendees to watch you constantly zooming in and out, especially on code.

A quick way around this is the magnification feature in SQL Server Management Studio 2012. Simply hold down the Ctrl key on your keyboard and scroll the mouse wheel up to increase the magnification or down to decrease it. Alternatively, you can click the magnification dropdown, located at the bottom left of the query window (by default), and select your desired level of magnification.

That’s it! Now you can quickly magnify your code to make it easier for your audience to see and you can reserve the zooming to highlight other areas as needed.

SQLSaturday #209 – Rochester

Welcome to the Roc!

Just a quick note that I’ll be presenting this weekend, Saturday May 11th, at SQLSaturday in Rochester, NY. It’s exciting to return to Central New York, as I graduated right down the road at Syracuse University. Go Orange! If you’re in the area come on down; it should be a great day of training. Check out the day’s schedule (link); I’ll be presenting two sessions this weekend.


You can follow the event along on Twitter using the #sqlsatroc hashtag as well. See you this weekend in the Roc!


24 Hours of PASS Fall 2012

Do you like training? Do you like FREE training? Then make sure you check out the 24 Hours of PASS event that starts today at 12:00 GMT. If you miss the events of the day, no worries, the recordings will be up within a few weeks.

Check out the great schedule of events:

Thursday, September 20, 2012

Session 01 (BIA) – Starts at 12:00 GMT
Choosing the Right Reporting Platform
Presenters: Brian Knight, Devin Knight

Session 02 (DBA) – Starts at 13:00 GMT
Best Practices for Upgrading to SQL Server 2012
Presenter: Robert Davis

Session 03 (AppDev) – Starts at 14:00 GMT
Three Ways to Identify Slow Running Queries
Presenter: Grant Fritchey

Session 04 (AppDev) – Starts at 15:00 GMT
Fasten Your Seatbelt – Troubleshooting the Most Difficult SQL Server Problems
Presenter: Klaus Aschenbrenner

Session 05 (CLD) – Starts at 16:00 GMT
SQL Server Private Cloud != Azure
Presenters: Allan Hirt, Ben DeBow

Session 06 (AppDev) – Starts at 17:00 GMT
What are the Largest SQL Server Projects in the World?
Presenter: Kevin Cox

Session 07 (AppDev) – Starts at 18:00 GMT
Practical Demos of Text Mining and Data Mining using SQL Server 2012
Presenter: Mark Tabladillo

Session 08 (DBA) – Starts at 19:00 GMT
PowerShell 101 for the SQL Server DBA
Presenter: Allen White

Session 09 (BID) – Starts at 20:00 GMT
Mobile Business Intelligence
Presenter: Jen Underwood

Session 10 (BID) – Starts at 21:00 GMT
Slow MDX Queries: The Case of the Empty Tuples
Presenter: Stacia Misner

Session 11 (DBA) – Starts at 22:00 GMT
Using SQL Server 2012 Always On
Presenter: Denny Cherry

Session 12 (PD) – Starts at 23:00 GMT
Leadership – Winning Influence in IT Teams
Presenter: Kevin Kline

Friday, September 21, 2012

Session 13 (BIA) – Starts at 00:00 GMT
BI Architecture With SQL 2012 & SharePoint 2010
Presenter: Rod Colledge

Session 14 (DBA) – Starts at 01:00 GMT
DBCC, Statistics, and You
Presenter: Erin Stellato

Session 15 (BIA) – Starts at 02:00 GMT
SSIS Design Patterns for Fun and Profit
Presenters: Jessica Moss, Michelle Ufford

Session 16 (AppDev) – Starts at 03:00 GMT
Characteristics of a Great Relational Database
Presenter: Louis Davidson

Session 17 (BIA) – Starts at 04:00 GMT
What’s All the Buzz about Hadoop and Hive?
Presenter: Cindy Gross

Session 18 (AppDev) – Starts at 05:00 GMT
Taking SQL Server Into the Beyond Relational Realm
Presenter: Michael Rys

Session 19 (BIA) – Starts at 06:00 GMT
Agile Data Warehousing with SQL Server 2012
Presenter: Davide Mauri

Session 20 (AppDev) – Starts at 07:00 GMT
Digging Into the Plan Cache
Presenter: Jason Strate

Session 21 (BIA) – Starts at 08:00 GMT
Introduction to Microsoft’s Big Data Platform and Hadoop Primer
Presenter: Denny Lee

Session 22 (BID) – Starts at 09:00 GMT
Big Data Analytics with PowerPivot and Power View
Presenter: Peter Myers

Session 23 (CLD) – Starts at 10:00 GMT
Best Practices and Lessons Learned Using SSIS for Large Scale Azure Data Movement
Presenter: Steven Howard

Session 24 (PD) – Starts at 11:00 GMT
Mentoring for Professional Development
Presenter: Andy Warren

* Please be sure to check our time zone guide for exact times in your area. This 24 Hours of PASS event begins at 12:00 GMT on September 20 and runs for 24 consecutive hours.

How To Prevent SELECT * The Evil Way

SELECT * FROM… when administrators see that in developer code, we generally tend to cringe. Why? In a nutshell, it’s terrible on a few fronts. First, the SELECT * FROM that gets written often lacks a WHERE clause. What’s the problem? Well, that pulls back every single row from the table.

Sure, that may not be too bad on a table with a few hundred rows, but what about on one with millions? That can cause a lot of performance problems, since you’re (potentially) reading all the data off disk. Secondly, do you really need all those rows and/or columns? It’s a waste of time and resources to pull back every column from a table if your application is only going to use a few of them anyway.

So how do we prevent this? Well, I recently learned an extremely evil way of preventing such a query. I’d like to start off by saying this was NOT my idea. I learned this trick from Adam Jorgensen (Blog | Twitter). I’d also like to add this disclaimer:

DO NOT JUST GO DO THIS IN YOUR PRODUCTION ENVIRONMENT! I am not responsible for whatever evil you turn loose upon your environment. Always test things out in a development environment first and get proper approvals before making any changes.

Pure Evil Method

This method is actually evil in its simplicity. What we’ll be doing is adding a new column to the existing table. The “trick” is that this will be a computed column whose formula causes an error, specifically a divide-by-zero error. Create the new column on the table and call it something obvious like ‘DoNotSelectAll’. In the Column Properties window, under the Table Designer section, there is a property called Computed Column Specification. In the formula field, enter (1/0). Save your table.
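If you prefer script to the table designer, here’s a minimal T-SQL sketch of the same trick. The dbo.Sales table and its columns are hypothetical, purely for illustration:

```sql
-- Hypothetical demo table (names are illustrative)
CREATE TABLE dbo.Sales
(
    SaleID   INT IDENTITY(1,1) PRIMARY KEY,
    SaleDate DATE,
    Amount   MONEY
);

-- The evil part: a computed column whose formula divides by zero.
-- Any query that references this column (like SELECT *) will fail.
ALTER TABLE dbo.Sales
    ADD DoNotSelectAll AS (1/0);

-- This now fails with a divide-by-zero error.
SELECT * FROM dbo.Sales;
```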


Now if I try to do my SELECT * on this table, I’ll get a lovely divide-by-zero error message.


Alright, we learned our lesson; now we’ll explicitly name only the columns we need.
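Assuming a table like the hypothetical dbo.Sales with a poisoned DoNotSelectAll computed column in place, naming columns explicitly avoids that column entirely:

```sql
-- Naming columns explicitly skips the poisoned computed column,
-- so this succeeds (though it still returns every row without a WHERE).
SELECT SaleID, SaleDate, Amount
FROM dbo.Sales;
```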


Now this last query worked, but notice how I didn’t include a WHERE clause, so it pulls back all rows anyway? Yup, your users can still pull back everything, but at least they’re not doing a SELECT *. Also keep in mind that if you’re used to right-clicking a table in SSMS and selecting Select Top 1000 Rows, with this column in place that will error as well. What are your alternative options?



Another way to control this kind of behavior is to not let users hit base tables at all. You could create views whose underlying queries limit the rows returned. A user can still do a SELECT * on the view, but the view’s own code limits what comes back.

Depending on your situation, this may or may not work. If a user needs specific data that isn’t in that limited pool of results, the restriction could adversely affect whatever process they’re using the data for.
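A sketch of the view approach, again using a hypothetical dbo.Sales table; the 90-day window is just an example of how a view can limit the result pool:

```sql
-- A view that limits what users can pull back; users query the view,
-- never the base table. The 90-day window is an illustrative choice.
CREATE VIEW dbo.vw_RecentSales
AS
SELECT SaleID, SaleDate, Amount
FROM dbo.Sales
WHERE SaleDate >= DATEADD(DAY, -90, GETDATE());
GO

-- SELECT * here is far safer: the view controls both columns and rows.
SELECT * FROM dbo.vw_RecentSales;
```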

Another option is wrapping everything in stored procedures and granting users permission to execute the stored procedures rather than query tables and views directly. This can be good since you’re encapsulating the code, and users can pass parameters to the stored procedures, so the queries can be somewhat dynamic.
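Here’s a rough sketch of that pattern; the procedure, the dbo.Sales table, and the ReportingUsers role are all hypothetical names for illustration:

```sql
-- A parameterized procedure; users get EXECUTE rights on this,
-- not SELECT rights on the underlying table.
CREATE PROCEDURE dbo.usp_GetSalesByDateRange
    @StartDate DATE,
    @EndDate   DATE
AS
BEGIN
    SET NOCOUNT ON;

    SELECT SaleID, SaleDate, Amount
    FROM dbo.Sales
    WHERE SaleDate BETWEEN @StartDate AND @EndDate;
END
GO

-- Grant execution to a hypothetical reporting role
GRANT EXECUTE ON dbo.usp_GetSalesByDateRange TO ReportingUsers;
```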


SQL Server 2008 introduced a feature called Resource Governor. It allows you to throttle resources for queries based on classifier functions and groupings you specify. Yes, it’s an Enterprise Edition feature, but it can be well worth it if you’re having resource-related issues due to runaway queries.

Now, this feature will NOT prevent users from running SELECT *-type queries, but you can throttle the resources allocated to a query, so you can at least control how badly it affects you.
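A minimal Resource Governor sketch follows. The pool name, group name, login (ReportUser), and percentages are all illustrative assumptions; run this in master and adjust to your environment:

```sql
-- Illustrative pool and workload group capping CPU and memory
-- for ad hoc reporting users.
CREATE RESOURCE POOL AdHocPool
    WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);

CREATE WORKLOAD GROUP AdHocGroup
    USING AdHocPool;
GO

-- Classifier function (must live in master): route a hypothetical
-- ReportUser login into the throttled group; everyone else gets default.
CREATE FUNCTION dbo.fn_Classifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @Group SYSNAME = N'default';
    IF SUSER_SNAME() = N'ReportUser'
        SET @Group = N'AdHocGroup';
    RETURN @Group;
END;
GO

-- Wire up the classifier and apply the changes.
ALTER RESOURCE GOVERNOR
    WITH (CLASSIFIER_FUNCTION = dbo.fn_Classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```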


My friend Brian Kelley (Blog | Twitter) will probably appreciate this one. Be stringent with the access you grant! Grant users only the permissions they need. Also, if you don’t want users banging against your transactional systems directly, think about setting up a dedicated/isolated reporting environment and pointing the users there instead.

The reporting box you stand up doesn’t necessarily have to be as beefy as your transactional system, and you can set up customized security on that database. This is especially helpful when the transactional system is a vendor application whose code you can’t modify.

Do you have any other suggestions/tricks to help prevent crazy user queries? Let’s hear it in the comments!

Identify and Alert for Long-Running Agent Jobs

Being a DBA is like being a train conductor. One of the biggest responsibilities is making sure all jobs are running as expected, or making sure “all the trains are running on time” so to speak. As my partner-in-crime Devin Knight (Blog | Twitter) posted earlier, we have come up with a solution to identify and alert for when SQL Agent jobs are running longer than expected.

The need for this solution came from the fact that, despite my having alerts for failed Agent jobs, we had a process pull a Palin and go rogue on us. The job was supposed to process a cube, but since it never failed, we (the admins) weren’t notified. We only found out when a user finally asked, “The cube hasn’t been updated in a couple days, what’s up?” Sad trombone.

As Devin mentioned in his post the code/solution below is very much a version 1 product so if you have any modifications/suggestions then have at it. We’ve documented in-line so you can figure out what the code is doing. Some caveats here:

  • This solution has been tested/validated on SQL Server 2005 (SP4) and 2008 R2 (SP1).
  • The code requires a table to be created in a database. I’ve set up a DBAdmin database on all servers here for custom DBA scripts such as this, Brent Ozar’s Blitz script, Ola Hallengren’s maintenance solution, Adam Machanic’s sp_whoisactive, etc. You can keep your scripts in any database you’d like; just be aware of the USE statement at the top of this particular code.
  • This solution requires that you have Database Mail set up and configured.
  • To set up this solution, create an Agent job that runs every few minutes (we’re using 5) to call the stored procedure.
  • FYI, I set the mail profile name to be the same as the server name. One, it makes it easy for me to standardize naming conventions across servers. Two, it lets me be lazy and code things like the line setting the mail profile name. If your mail profile is named differently, make sure you correct it there.
  • Thresholds: this is documented in the code, but I’m calling it out anyway. For any job whose average runtime is less than 5 minutes, the threshold is the average runtime + 10 minutes (e.g., a job that averages 2 minutes has an alert threshold of 12 minutes). Jobs averaging longer than 5 minutes are controlled by a variable value, with a default of 150% of the average runtime. For example, a job that averages a 10-minute runtime has an alert threshold of 15 minutes.
  • If a job triggers an alert, that information is inserted into a table. Subsequent runs of the stored procedure then check the table to see if the alert has already been reported. We did this to avoid emailing the admins on every subsequent run of the stored procedure.
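The threshold rule described above can be sketched as a T-SQL expression. The table and column names here (dbo.JobRunTimeStats, AvgRunTimeMinutes) are hypothetical; the actual script derives averages from SQL Agent job history in msdb:

```sql
-- Default threshold: 150% of average runtime (overridable via variable)
DECLARE @ThresholdPct INT = 150;

-- Illustrative threshold calculation: jobs averaging under 5 minutes
-- get average + 10 minutes; longer jobs get average * (percent / 100).
SELECT JobName,
       AvgRunTimeMinutes,
       CASE
           WHEN AvgRunTimeMinutes < 5
               THEN AvgRunTimeMinutes + 10
           ELSE AvgRunTimeMinutes * @ThresholdPct / 100.0
       END AS AlertThresholdMinutes
FROM dbo.JobRunTimeStats;  -- hypothetical summary of Agent job history
```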

CODE (WARNING: This code is currently beta and subject to change as we improve it)

Last script update: 7/24/2012

Change log:

7/12/2012 – Updated code to deal with “phantom” jobs that weren’t really running. Improved logic to handle this. Beware, uses undocumented stored procedure xp_sqlagent_enum_jobs

7/24/2012 – Updated code to v1.16 to deal with an email alerts error. I’m removing the code from the blog post and asking folks to instead download it directly from the link below; formatted code on the blog gets messy and is a pain to update.

Download script link – Click here

Got any feedback/comments/criticisms? Let me hear them in the comments!