
Yes, I’ve been bad about updating.  I was traumatized by a bunch of friends getting laid off from a former employer, and then I had two very busy weeks.

This system for tracking total file usage, DBs only, came up while talking to someone recently (Eric, are you reading this?), and I thought I would share.

First, there’s the query to get SQL Server to tell you how much space it’s using.  Sure, you could map the drive or remote out to the server (unless, you know, you can’t), but this is a good sanity-checking number that you can compare to what the OS says you’re using.  I had an issue recently where that was handy information.  I may have downloaded this off the internet somewhere, or may have written it.  I forget which.  So, if I just stole your query and posted it as my own, I’m sorry!  (I did Google and didn’t find it.)

declare @totalsize float,
@bytes float,
@kb float,
@mb float,
@gb float,
@tb float

CREATE TABLE #temp
(
size int
)

insert into #temp (size) EXECUTE sp_msforeachdb 'SELECT size FROM [?].sys.database_files'

select @totalsize = SUM(size) from #temp
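-- size in sys.database_files is in 8 KB pages, hence the * 8192 below to get bytes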
set @bytes = (@totalsize * 8192)
set @kb = (@bytes / 1024)
set @mb = (@kb / 1024)
set @gb = (@mb / 1024)
set @tb = (@gb / 1024)

drop table #temp

--print @totalsize
--print @kb
--print @mb
print @gb
print @tb


You can tell which data sizes I’m generally dealing with by which two aren’t commented out. Comment or uncomment as suits your situation.

Well, that’s fine and dandy, but you might have more than one server, or more than one instance, or you might want to track those numbers over time (which is what I was going for, yes).  I do, so I have a table:

CREATE TABLE [dbo].[datasize](
[id] [int] IDENTITY(1,1) NOT NULL,
[instance] [varchar](50) NULL,
[datasize] [float] NULL,
[dateadded] [datetime] NULL,
CONSTRAINT [PK_datasize] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY]

GO

SET ANSI_PADDING OFF
GO

ALTER TABLE [dbo].[datasize] ADD  CONSTRAINT [DF_datasize_dateadded]  DEFAULT (getdate()) FOR [dateadded]
GO


I generally store the GB number in the datasize column, because some instances are bigger than others (GB is my smallest sane number), but you can store whichever size is meaningful for your situation.  Just, you know.  Always store the same measure (duh, you knew that).
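For the record, a row ends up looking something like this (the instance name and number are made up; dateadded fills itself in from the default above):

insert into dbo.datasize (instance, datasize)
values ('SERVER01\PROD', 1234.56)  -- that instance is using 1234.56 GB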

Sadly, I have not yet automated populating this table because I’m working across two domains that don’t trust each other, so I’m C&Ping the results from six instances.  That’s not that bad.  You could probably easily automate that, though.  I’m automating tracking log size and usage (for reasons I may address later), so, you know.  Easy peasy.  Make SQL Agent do it for you. (I’m a sysadmin.  We’re lazy.)  Especially if you have, like, a million instances.
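If your instances can all reach the table (mine can’t, hence the copying and pasting), a SQL Agent job step per instance could be as simple as bolting an insert onto the size script from the top of this post. A rough sketch, assuming the table lives in a database called Maintenance (the name I use in the scripts further down):

declare @totalsize float, @gb float

create table #temp (size int)

-- same trick as above: sum the size of every database file on the instance
insert into #temp (size) EXECUTE sp_msforeachdb 'SELECT size FROM [?].sys.database_files'

select @totalsize = SUM(size) from #temp
set @gb = (@totalsize * 8192) / 1024 / 1024 / 1024
drop table #temp

-- record today's number for this instance
insert into Maintenance.dbo.datasize (instance, datasize)
values (@@SERVERNAME, @gb)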

Okay, so you have this data.  Now what?

I have a view that’s the current data:

CREATE VIEW [dbo].[current_size_by_instance]
AS
SELECT instance, datasize,
       CONVERT(varchar, CAST(DATEDIFF(dd, 0, dateadded) AS Datetime), 110) AS date
FROM dbo.datasize
WHERE dateadded >
      (SELECT MAX(CAST(DATEDIFF(dd, 0, dateadded) AS Datetime))
       FROM dbo.datasize AS datasize_1)

GO


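Pulling the latest numbers is then just a select against the view (a trivial example):

select instance, datasize, date
from dbo.current_size_by_instance
order by instance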
Make SQL sum or average or do any other kind of slicing and dicing you want there. However, the really shiny part for me is tracking data usage:

CREATE TABLE #filegrowth (instance varchar(255), maxdata float, mindata float, maxdate datetime, mindate datetime)

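-- the most recent day's rows go in as the max side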
insert into #filegrowth (instance, maxdata, maxdate)
(select instance, datasize as maxdata, dateadded from Maintenance.dbo.datasize where dateadded  >
(SELECT     CONVERT(varchar, MAX(dateadded), 112)
FROM          Maintenance.dbo.datasize))
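-- the earliest day's rows go in as the min side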
insert into #filegrowth (instance, mindata, mindate)
(select instance, datasize as mindata, dateadded from Maintenance.dbo.datasize where dateadded  <
(SELECT     CONVERT(varchar, MIN(dateadded) + 1, 112)
FROM          Maintenance.dbo.datasize))

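-- growth per instance between the earliest and latest snapshots, plus a grand total row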
select instance, (MAX(maxdata) - MAX(mindata)) as filegrowth, datediff(d,(max(mindate)),(MAX(maxdate))) as timeframe_in_days from #filegrowth group by instance
union
select 'total' as instance, (SUM(maxdata) - SUM(mindata)) as filegrowth, datediff(d,(max(mindate)),(MAX(maxdate))) as timeframe_in_days from #filegrowth

drop table #filegrowth

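-- and a per-day total in TB (datasize here is stored in GB), handy for trending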
SELECT sum(datasize) / 1024 as TB, CONVERT(varchar, CAST(DATEDIFF(dd,0,dateadded) AS Datetime), 110) as date
FROM Maintenance.dbo.datasize GROUP BY CAST(DATEDIFF(dd,0,dateadded) AS Datetime)


I actually have that as a stored procedure so I don’t have to open a file to load a script. (Lazy!)  You can, too: just paste it in as the body of CREATE PROCEDURE SP_OMGLAZYBUM and go from there.
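If you want to go that route, the wrapper is about as minimal as it sounds (SP_OMGLAZYBUM is obviously a placeholder name; the body is just the filegrowth script above):

CREATE PROCEDURE dbo.SP_OMGLAZYBUM
AS
BEGIN
    SET NOCOUNT ON;
    -- paste the #filegrowth script from above here
END
GO

-- and then running it is just:
EXEC dbo.SP_OMGLAZYBUM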

And yes, this is in addition to SQL Monitor and Nagios.  But it came in handy recently: we were having DAS issues and there was some question about SQL Server’s file usage, and comparing last week’s numbers against fresh ones let me confirm that yes, those were sane numbers.
