SQLChicken.com

SQL Server DBA Tips & Tricks

24 Hours of PASS Fall 2012

Do you like training? Do you like FREE training? Then make sure you check out the 24 Hours of PASS event that starts today at 12:00 GMT. If you miss the day's events, no worries: the recordings will be up within a few weeks.

Check out the great schedule of events:

Thursday, September 20, 2012

Session 01 (BIA) – Starts at 12:00 GMT
Choosing the Right Reporting Platform
Presenters: Brian Knight, Devin Knight

Session 02 (DBA) – Starts at 13:00 GMT
Best Practices for Upgrading to SQL Server 2012
Presenter: Robert Davis

Session 03 (AppDev) – Starts at 14:00 GMT
Three Ways to Identify Slow Running Queries
Presenter: Grant Fritchey

Session 04 (AppDev) – Starts at 15:00 GMT
Fasten Your Seatbelt – Troubleshooting the Most Difficult SQL Server Problems
Presenter: Klaus Aschenbrenner

Session 05 (CLD) – Starts at 16:00 GMT
SQL Server Private Cloud != Azure
Presenters: Allan Hirt, Ben DeBow

Session 06 (AppDev) – Starts at 17:00 GMT
What are the Largest SQL Server Projects in the World?
Presenter: Kevin Cox

Session 07 (AppDev) – Starts at 18:00 GMT
Practical Demos of Text Mining and Data Mining using SQL Server 2012
Presenter: Mark Tabladillo

Session 08 (DBA) – Starts at 19:00 GMT
PowerShell 101 for the SQL Server DBA
Presenter: Allen White

Session 09 (BID) – Starts at 20:00 GMT
Mobile Business Intelligence
Presenter: Jen Underwood

Session 10 (BID) – Starts at 21:00 GMT
Slow MDX Queries: The Case of the Empty Tuples
Presenter: Stacia Misner

Session 11 (DBA) – Starts at 22:00 GMT
Using SQL Server 2012 Always On
Presenter: Denny Cherry

Session 12 (PD) – Starts at 23:00 GMT
Leadership – Winning Influence in IT Teams
Presenter: Kevin Kline

Friday, September 21, 2012

Session 13 (BIA) – Starts at 00:00 GMT
BI Architecture With SQL 2012 & SharePoint 2010
Presenter: Rod Colledge

Session 14 (DBA) – Starts at 01:00 GMT
DBCC, Statistics, and You
Presenter: Erin Stellato

Session 15 (BIA) – Starts at 02:00 GMT
SSIS Design Patterns for Fun and Profit
Presenters: Jessica Moss, Michelle Ufford

Session 16 (AppDev) – Starts at 03:00 GMT
Characteristics of a Great Relational Database
Presenter: Louis Davidson

Session 17 (BIA) – Starts at 04:00 GMT
What’s All the Buzz about Hadoop and Hive?
Presenter: Cindy Gross

Session 18 (AppDev) – Starts at 05:00 GMT
Taking SQL Server Into the Beyond Relational Realm
Presenter: Michael Rys

Session 19 (BIA) – Starts at 06:00 GMT
Agile Data Warehousing with SQL Server 2012
Presenter: Davide Mauri

Session 20 (AppDev) – Starts at 07:00 GMT
Digging Into the Plan Cache
Presenter: Jason Strate

Session 21 (BIA) – Starts at 08:00 GMT
Introduction to Microsoft’s Big Data Platform and Hadoop Primer
Presenter: Denny Lee

Session 22 (BID) – Starts at 09:00 GMT
Big Data Analytics with PowerPivot and Power View
Presenter: Peter Myers

Session 23 (CLD) – Starts at 10:00 GMT
Best Practices and Lessons Learned Using SSIS for Large Scale Azure Data Movement
Presenter: Steven Howard

Session 24 (PD) – Starts at 11:00 GMT
Mentoring for Professional Development
Presenter: Andy Warren

* Please be sure to check our time zone guide for exact times in your area. This 24 Hours of PASS event begins at 12:00 GMT on September 20 and runs for 24 consecutive hours.


How To Prevent SELECT * The Evil Way

SELECT * FROM… when administrators see that in developer code, we generally tend to cringe. Why? In a nutshell, it's terrible on a few fronts. First, that SELECT * FROM is typically written (a lot of the time) without a WHERE clause. What's the problem? Well, that pulls back every single row from the table.

Sure, that may not be too bad on a table with a few hundred rows, but what about on a table with millions? That can cause a lot of performance problems, since you're (again, potentially) trying to read all of that data off disk. Second, do you really need all those rows and/or columns? It's a waste of time and resources to pull back every column from a table if your application is only going to use a few of them anyway.

So how do we prevent this? Well I recently learned an extremely evil way of preventing such a query. I’d like to start off with, this was NOT my idea. I learned this trick from Adam Jorgensen (Blog | Twitter). I’d also like to add this disclaimer:

DO NOT JUST GO DO THIS IN YOUR PRODUCTION ENVIRONMENT! I am not responsible for whatever evil you turn loose upon your environment. Always test things out in a development environment first and get proper approvals before making any changes.

Pure Evil Method

This method is actually evil in its simplicity. What we'll be doing is adding a new column to the existing table. The "trick" is that this will be a computed column whose formula causes an error, specifically a divide-by-zero error. As shown in the screenshot below, create the new column on the table and call it something obvious like 'DoNotSelectAll'. In the Column Properties window, under the Table Designer section, there is a property called Computed Column Specification. In the formula field, enter (1/0). Save your table.

[Screenshot: adding the DoNotSelectAll computed column in the SSMS Table Designer]
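If you'd rather skip the designer, the same column can be added in T-SQL. This is just a sketch; dbo.MyTable is a placeholder for your own table name:

```sql
-- Add a computed column whose formula always raises a divide-by-zero
-- error, so any query that touches it (like SELECT *) fails
ALTER TABLE dbo.MyTable
ADD DoNotSelectAll AS (1/0);
```

After this, SELECT * against the table fails with a divide-by-zero error, while queries that name their columns explicitly still work.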

Now if I try to do my SELECT * on this table, I’ll get this lovely message:

[Screenshot: divide-by-zero error returned by the SELECT * query]

Alright, we learned our lesson, now we’ll explicitly name our columns that we need:

[Screenshot: query that names its columns explicitly, which succeeds]

Now this last query worked, but notice how I didn't put a WHERE clause on it, so it pulls back all rows anyway? Yup, your users can still pull back everything, but at least they're not doing a SELECT *. Also keep in mind that if you're used to right-clicking a table in SSMS and selecting Select Top 1000 Rows, with this column in place that will error for you as well. What are your alternative options?

Less Evil Methods

Abstraction

Another way to control this kind of behavior is to not let users hit base tables at all. You could create views whose underlying queries limit the rows returned. A user can still do a SELECT * on the view, but the view's own code limits what comes back.
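As a sketch of the idea (object and role names here are made up), such a view might look like this:

```sql
-- Users query the view instead of the base table;
-- TOP caps how many rows any query against it can return
CREATE VIEW dbo.vSalesRecent
AS
SELECT TOP (1000) OrderID, CustomerID, OrderDate, TotalDue
FROM dbo.Sales
ORDER BY OrderDate DESC;
GO

-- Grant access to the view only, not to dbo.Sales itself
GRANT SELECT ON dbo.vSalesRecent TO ReportingUsers;
```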

Depending on your situation, this may or may not work. If a user needs specific data that isn't in that limited pool of results, it could adversely affect whatever process they're using the data for.

Another option is wrapping everything in stored procedures and granting users access to execute those procedures rather than to query tables and views directly. On the one hand, this can be good since you're encapsulating the code. Users can also pass parameters to stored procedures, so you can make the queries somewhat dynamic.
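A minimal sketch of that pattern (again, all names are examples, not anything from a real system):

```sql
-- Users get EXECUTE only; the procedure controls exactly what comes back
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Sales
    WHERE CustomerID = @CustomerID;  -- parameter makes the query dynamic
END
GO

GRANT EXECUTE ON dbo.GetOrdersByCustomer TO ReportingUsers;
```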

Handbrake

SQL Server 2008 introduced a feature called Resource Governor. It allows you to throttle resources for queries based on classifier functions and groupings you specify. Yes, it's an Enterprise Edition feature, but it can be well worth it if you're having resource-related issues due to runaway queries.

Now, this feature will NOT prevent users from running SELECT *-type queries, but you can throttle how many resources are allocated to a query, so you can at least control how badly it'll affect you.
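Here's a rough sketch of what that setup can look like. The pool, group, login, and percentages are all invented for illustration; size them for your own environment:

```sql
USE master;
GO
-- Pool and workload group that cap resources for ad hoc users
CREATE RESOURCE POOL AdHocPool
    WITH (MAX_CPU_PERCENT = 25, MAX_MEMORY_PERCENT = 25);
GO
CREATE WORKLOAD GROUP AdHocGroup USING AdHocPool;
GO
-- Classifier function: route a specific login into the throttled group
CREATE FUNCTION dbo.fnClassifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp SYSNAME = N'default';
    IF SUSER_SNAME() = N'ReportingUser'
        SET @grp = N'AdHocGroup';
    RETURN @grp;
END
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```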

Security

My friend Brian Kelley (Blog | Twitter) will probably appreciate this one. Be stringent with the access you grant! Grant users only the access they need. Also, if you don't want users banging against your transactional systems directly, think about setting up a dedicated/isolated reporting environment and pointing the users there instead.

The reporting box you stand up doesn't necessarily have to be as beefy as your transactional system, and you can set up customized security on that database. This is especially helpful when the transactional system is a vendor application whose code you can't modify.

Do you have any other suggestions/tricks to help prevent crazy user queries? Let’s hear it in the comments!


Identify and Alert for Long-Running Agent Jobs

Being a DBA is like being a train conductor. One of the biggest responsibilities is making sure all jobs are running as expected, or making sure “all the trains are running on time” so to speak. As my partner-in-crime Devin Knight (Blog | Twitter) posted earlier, we have come up with a solution to identify and alert for when SQL Agent jobs are running longer than expected.

The need for this solution came from the fact that, despite my having alerts for failed Agent jobs, we had a process pull a Palin and go rogue on us. The job was supposed to process a cube, but since it never failed, we (the admins) were never notified. The only way we found out was when a user finally alerted us and asked, "The cube hasn't been updated in a couple of days. What's up?" Sad trombone.

As Devin mentioned in his post the code/solution below is very much a version 1 product so if you have any modifications/suggestions then have at it. We’ve documented in-line so you can figure out what the code is doing. Some caveats here:

  • This solution has been tested/validated on SQL Server 2005 (SP4) and 2008 R2 (SP1).
  • The code requires a table to be created in a database. I've set up a DBAdmin database on all servers here for custom DBA scripts such as this one, Brent Ozar's Blitz script, Ola Hallengren's maintenance solution, Adam Machanic's sp_whoisactive, etc. You can keep your scripts in any database you'd like; just be aware of the USE statement at the top of this particular code.
  • This solution requires that you have Database Mail set up/configured.
  • To set up this solution, create an Agent job that runs every few minutes (we're using 5) to call this stored procedure.
  • FYI, I set the mail profile name to be the same as the server name. One, it makes it easy for me to standardize naming conventions across servers. Two, it lets me be lazy and code stuff like I did in the line setting the mail profile name. If your mail profile is named differently, make sure you correct it there.
  • Thresholds – This is documented in the code, but I'm calling it out anyway. We've set it up so that for any job whose average runtime is less than 5 minutes, the threshold is average runtime + 10 minutes (e.g. a job that averages 2 minutes would have an alert threshold of 12 minutes). Anything beyond a 5-minute average runtime is controlled by a variable value, with a default of 150% of average runtime. For example, a job that averages a 10-minute runtime would have an alert threshold of 15 minutes.
  • If a job triggers an alert, that information is inserted into a table. Subsequent runs of the stored procedure then check the table to see if the alert has already been reported. We did this to avoid emailing admins on every subsequent run of the stored procedure.
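The threshold rule above boils down to a simple CASE expression. Here's a sketch of the calculation (variable names are made up, not taken from the actual script):

```sql
-- Sketch of the alert-threshold rule described above
DECLARE @AvgRunTimeMin INT = 10;         -- job's historical average runtime, in minutes
DECLARE @PctOfAvg DECIMAL(4,2) = 1.50;   -- default threshold: 150% of average

SELECT AlertThresholdMin =
    CASE WHEN @AvgRunTimeMin < 5
         THEN @AvgRunTimeMin + 10                       -- short jobs: average + 10 minutes
         ELSE CAST(@AvgRunTimeMin * @PctOfAvg AS INT)   -- longer jobs: 150% of average
    END;
-- a 2-minute average yields 12; a 10-minute average yields 15
```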

CODE (WARNING: This code is currently beta and subject to change as we improve it)

Last script update: 7/24/2012

Change log:

7/12/2012 – Updated code to deal with “phantom” jobs that weren’t really running. Improved logic to handle this. Beware, uses undocumented stored procedure xp_sqlagent_enum_jobs

7/24/2012 - Updated code to v 1.16 to deal with email alerts error. Removing code from blog post and asking folks to instead download directly from download link below. Formatted code on blog makes it messy and a pain when updating.

Download script link – Click here

Got any feedback/comments/criticisms? Let me hear them in the comments!


Monday Morning Mistakes: Connecting to Wrong Environments

Today's Monday Morning Mistake is another painfully common one that we all run into at some point. You've got some new code from a developer that's supposed to be run against QA (because you're a good DBA and you don't grant developers access to anything outside of Development). Part of the code drops some existing tables and recreates them with new schemas, objects, etc. You hit the execute button, and the blood drains from your face as you realize the instance you were connected to was a production instance!

Issue

You have multiple instances of SQL Server to manage and you want a visual indicator in SQL Server Management Studio (SSMS) to let you quickly know which environment you’re connected to.

Read More


BIxPress 3.5–Now With More Awesome!

Pragmatic Works has now released version 3.5 of the award-winning BIxPress software! For those not familiar with BIxPress, it's a tool that helps you develop SSIS/SSAS solutions faster, easily and quickly deploy SSIS packages, monitor the performance of SSIS packages, and much, much more!

So what's new in 3.5? In addition to SQL Server 2012 support (SSIS), you may have noticed that in previous releases Reporting Services didn't get much love. That all changes with this release, as we've now incorporated health monitoring for Reporting Services instances, called Reporting Performance Monitor! The new dashboard includes Average Runtime for Reports, Longest Running Reports, Report Memory Usage, Average Number of Rows, Active Users, et cetera:

BIxPress Reporting Console Dashboard

Another great feature in this release is the Report Deployment Tool. This feature lets you quickly and easily deploy your Reports, Folder Structures, and Data Sources between Reporting Services instances!

BIxPress Report Deploy

One more major update in this release is the update to the SSIS Package Performance monitoring interface. Same great insight, new cleaner interface!

BIxPress Package Performance Report

What are you waiting for? If you already have BIxPress, you can update through the regular process. Don’t have it yet and want to try it out? Download a trial copy today! Also, if you’re a Microsoft MVP don’t forget Pragmatic Works offers NFR licenses to MVPs, so go get your copies today!


SQL Server 2012: Biggest Little Core-house

You sure do got a 'purty motherboard...

Now that SQL Server 2012 is generally available to the public, many companies are looking at the new platform and trying to figure out how to move to it and take advantage of all the cool new features. Unfortunately, some folks haven't noticed some of the fine print that came along with this release. I think at this point it's safe to say most everyone knows about the change to a core-based licensing model. The part now causing major heartache is an issue that Aaron Bertrand (Blog | Twitter) recently brought up in a post I HIGHLY recommend you go read: 'A cautionary tale about grandfathering CAL licenses in SQL Server 2012 Enterprise'.

To quickly summarize the issue: there's a 20-core limit in place with Enterprise Edition (UPDATE – Thanks for this clarification point, Aaron: to be clear, the 20-core restriction *only* applies if you upgrade Server + CAL licenses via SA. With core-based licensing, the limit is what you licensed; in other words, if you buy a 64-core Enterprise license, you get to use all 64 cores). In a nutshell, that means if you have a server with four 8-core processors, for a total of 32 cores, and you install SQL Server 2012 on it previously licensed by CAL with SA, SQL Server will only "see"/use 20 of those cores! This is a huge deal, and one I'm really surprised has not been addressed more vocally by the user community. I've already seen a couple of statements as strong as "based on this, we will seriously start looking at another platform". My hope is that if enough noise is made by the customer base, Microsoft will at least raise that limit, similar to how VMware changed their licensing for vSphere 5 after customer backlash. Have you or your company run into this issue yet? Let me hear your thoughts in the comments.

Yet Another Update: Per Steve Jones’ request in comments, here’s a visual indicator of scenarios and how it could affect you

Edition    | Licensing today                                   | Licensing in 2012                        | Can I use all my cores?
Enterprise | Per Processor (4 procs, 4 cores)                  | License all 16 cores (buy 8 core packs*) | Yes
Enterprise | Per Processor (4 procs, 8 cores)                  | License all 32 cores (buy 16 core packs) | Yes
Enterprise | Grandfathered Server (4×4 cores) + CALs (with SA) | No core packs purchased due to agreement | Yes
Enterprise | Grandfathered Server (4×8 cores) + CALs (with SA) | No core packs purchased due to agreement | No (limited to 20)
Enterprise | Server (4×8 cores) + CALs (with SA)               | License 32 cores (buy 16 core packs)     | Yes

*Core pack comes in pairs so 16 cores requires 8 packs to be purchased, etc.

Additionally, here's a link to the licensing FAQ from Microsoft. The one you want to pay attention to (for this scenario) is the last one:

How will the 20 Core Server limit work for SQL Server 2012?

Existing Enterprise Edition licenses in the Server + CAL licensing model that are upgraded to SQL Server 2012 and beyond will be limited to server deployments with 20 cores or less. This 20 core limit only applies to SQL Server 2012 Enterprise Edition Server licenses in the Server + CAL model and will still require the appropriate number/versions of SQL Server CALs for access.

Addendum: Please note, this blog (or any other) should NOT be your definitive source for licensing information. For that, always ALWAYS contact your local Microsoft rep, as they have the details of your specific agreements and options. This post is meant for informational purposes only.


Pragmatic Works Software for MVPs

This is just a quick post to remind folks who are current Microsoft MVPs that Pragmatic Works offers NFR licenses of its software! This NFR offer includes:

  • BIxPress – Audit, Notify, Deploy and Manage SSIS Packages
  • BIDocumenter – One Stop Documentation Solution for SQL Server, SSAS, SSRS and SSIS
  • Task Factory – Collection of high performance SSIS components
  • DTSxChange (10-pack) – Profile, Convert and Monitor. One stop DTS to SSIS Migration Solution

To get your licenses simply email our Sales folks and they’ll be happy to get you started! If you’re not an MVP and would like to try our software, you can download trial versions of all our software as well.


STOP! Consolidate and Listen

I just wrapped up my 24 Hours of PASS session on consolidation. A big THANK YOU again to everyone in attendance, who kindly put up with my horrendous rapping “skills”! As promised, below is the link to the slide deck. In the presenter’s notes you’ll find some good links and resources for consolidation.

Download link: STOP_Consolidate_and_Listen_24HOP (ZIP file)

If you have any additional questions around consolidation, virtualization or my general sanity feel free to leave them in the comments section below, thanks!


Monday Morning Mistakes: Not Setting Memory Limits

Welcome back to another edition of the Monday Morning Mistakes series. Today's issue is one I run into quite often with clients, and it's an important topic for a database administrator to know about. Without further ado, let's get to our issue.

Issue

You have the SQL Server database engine installed on a system alongside other services such as Analysis Services, Reporting Services, and/or Integration Services, and you constantly seem to run out of memory. Restarting the service seems to fix the issue temporarily, but some time later the same problem returns.
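A common remedy, sketched here, is to cap how much memory the database engine will take so the other services keep their share. The 8192 MB figure is just an example value; size it for your own server:

```sql
-- 'max server memory (MB)' is an advanced option, so expose those first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the database engine at 8 GB (example value only)
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;
```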

Read More


Monday Morning Mistakes: SSIS Expressions Not Evaluating Correctly

SSIS Expressions

Expressions in SSIS are great. They allow you to create dynamic values for all sorts of stuff like variables, connection strings, properties for almost anything, etc. One huge issue that tends to trip up a lot of folks, especially those new to SSIS, is the evaluation of those expressions when using variables.

The Issue

You create an SSIS variable with an expression, but at runtime the expression is not evaluated as expected. Instead, the variable uses its default/static value.

Quick Answer

Make sure the 'Evaluate as Expression' (EvaluateAsExpression) property on the variable is set to True. Without it set to True, the variable will evaluate to its hard-set value.

Read More
