## SQL Server Express Maintenance

Don’t you love looking at your old scripts?

I’ve rewritten all of our maintenance jobs, including the Task Scheduler jobs that handle SBSMonitoring.  I’ve rid myself of Windows Internal Database, but was still using the script I wrote for it.  Not any more!

I decided to use and abuse Ola Hallengren’s scripts, so what I’m using for SBSMonitoring is:

Download and install CommandExecute.sql, DatabaseIntegrityCheck.sql, and IndexOptimize.sql. Then, create the following .sql file:

```sql
DECLARE @4wks datetime
-- Truncate the current time to the hour, then go back 672 hours (28 days)
SET @4wks = DATEADD(hh, DATEDIFF(hh, 0, GETDATE()) - 672, 0)

EXECUTE [dbo].[IndexOptimize] @Databases = 'ALL_DATABASES', @OnlyModifiedStatistics = 'Y'
EXECUTE [dbo].[DatabaseIntegrityCheck] @Databases = 'ALL_DATABASES'

EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @4wks

-- Prune Ola's command log, if it exists on this instance
IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'CommandLog'))
BEGIN
    DELETE FROM [dbo].[CommandLog] WHERE DATEDIFF(dd, StartTime, GETDATE()) > 30
END

-- Only run the SBS cleanup proc on the monitoring instance
IF @@SERVERNAME = 'MYSBSSERVER\SBSMONITORING'
BEGIN
    EXECUTE [SBSMonitoring].[dbo].[CleanupDatabase]
END
```
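The `@4wks` line truncates the current time to the hour and then subtracts 672 hours (28 days). As a sanity check, here is the same arithmetic sketched in Python (the function name is mine, not from the script):

```python
from datetime import datetime, timedelta

def four_weeks_ago(now: datetime) -> datetime:
    """Truncate to the hour, then subtract 672 hours (28 days)."""
    truncated = now.replace(minute=0, second=0, microsecond=0)
    return truncated - timedelta(hours=672)

print(four_weeks_ago(datetime(2012, 3, 1, 14, 37, 22)))
# 28 days earlier, on the hour: 2012-02-02 14:00:00
```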

This can be called with sqlcmd in Task Scheduler: `sqlcmd -E -SmyServer -i c:\path\to\scriptname.sql`

I’ve also started using Microsoft’s script to reindex SUSDB.  I migrated that database into SQL Server, so its maintenance is an actual job, but SBS doesn’t support migrating the SBSMonitoring database out of the SQL Server 2005 Express SBSMONITORING instance that’s installed by default, hence the scripty maintenance.

Filed under Uncategorized

## Backups FTW!

I could say it’s because I’m a DBA and SysAdmin and it’s an occupational hazard, and frequently do.  I could also say it’s because I’m neurotic.  😉  But it’s probably the most honest to say, “Because I once experienced a catastrophic data loss…”

For whatever reason–pick one or more above–I’m compulsive about backups of my home equipment.  I actually use two different backup programs for different use cases:

1. “My house burned down.”
2. “I accidentally deleted a file.”
3. “My computer is so dead that I can’t boot it to get the data off it, so I bought a new one.”
4. “My hard drive died.”

For the first three, just about any cloud-based solution will work.  The first product I started using I picked for a highly dorky reason:  Neil Gaiman, my author crush, tweeted about it.  Crashplan protects me against data loss caused by stolen equipment and burning houses, restores accidental deletions, and when my old laptop died it painlessly put my data on the new laptop.  (Okay, yes, I could have cracked the case, put the old drive in an external enclosure, etc.  The point is that I didn’t have to.)

However, it’s not really intended for use #4.  Well.  You could use it for use #4, after reinstalling the OS and all your apps, or putting the vendor image back on and reinstalling your apps, or whatever.  However, what I really want for use #4 is something that pulls an image.  For that, I use Acronis, which I picked because my office uses it.

Acronis has a cloud service, but my main desktop has iTunes, and multiple season passes of NCIS and Doctor Who and the like, so I’m using 390 GB.  I’m a little… iffy… about uploading 390 GB worth of data a second time.  I also haven’t compared the cloud storage prices.  However, I <3 Acronis as of last week, because my hard drive failed.  In my main desktop.  The one with all that data on it.

I replaced the hard drive with a similarly-sized non-dead drive, booted it off the previously-burned recovery media, set it to restoring, and went to bed.  The next morning, it was like nothing ever happened.  Well.  I had to reapply a Windows update and recover a file.

So, my point is… Neil Gaiman is hot enough that I buy backup programs because he tells me to?  😉

Seriously, that could have been painful and traumatic, and instead it just ran overnight and all was love and goodness.  Because I do backups.  Backups FTW!

Filed under geekiness

## The Career Venn Diagram

Every now and then, people ask me for career advice.  Or, you know.  They start complaining about their careers and I annoy them with unsolicited advice.  😉  Either way, my advice for choosing a career comes down to:

1. Things you can do
2. that people will pay you to do
3. that don’t suck.

As a Venn Diagram, it looks like this:

(Sorry about the graphic. MS Paint FTW!)

You’d be surprised–or maybe you wouldn’t–by the resistance I get to this suggestion.  “Well, that’s fine for you because you’re in IT, but no one will pay me to sing opera/write poetry/play World of Warcraft/play ice hockey/go to Mars/etc.”

I didn’t specify that it had to be your lifelong dream.  I know that little children usually say they want to be an astronaut, or a ballerina, or write books, or be the President, and don’t generally say they want to be accountants or purchasers or write limericks about snack foods.  I just said a job that doesn’t suck.  Apparently, many people believe that if it doesn’t suck, no one will pay them to do it.  Not true!

Lots of people settle for two out of three.  (Things you can do that people will pay you to do is the only viable two out of three.  You can do things you can do that don’t suck for no money if you’re independently wealthy, or you can try to do things that don’t suck that pay well but that you’re ill-suited to doing.  For most people, neither of those will last long.)  Of course, you need to eat, but I always urge people to go for three out of three.

So, if you’re someone who’s considering a transition–like someone I know–think of the Venn diagram when you’re doing your research.  Don’t invest a lot of time and energy into moving into a high-paying direction you hate, or a high-unemployment and/or low-paying direction.  They pay some people to play ice hockey, but if you’re 45 and have bad knees it probably won’t be you.  That doesn’t mean your life has to suck.  There are alternatives.

Filed under career

## Fun with PXE, redux.

You may recall that I’m serving up not only Windows Deployment Services capture and deploy but also Acronis recovery media over PXE.

Well, now I’m also serving up Windows Defender Offline. When chosen from the PXE boot menu, it boots into Windows PE, automagically launches Defender, updates the definitions, and runs a full scan.  And then sits there and waits for you to see the results and hit reboot.

Here’s what I did.  This assumes you have Windows Deployment Services and the Windows Automated Installation Kit.

1. Download the Windows Defender Offline tool from Microsoft.
2. Run the tool to create an ISO (“to burn to CD/DVD later”).
3. Mount the ISO with Virtual CloneDrive (or a similar product).
4. Fish boot.wim out of \sources on the virtual CD drive.
5. Create a directory to mount the wim into if one doesn’t already exist.
6. Mount boot.wim into that directory: `Dism /Mount-Wim /WimFile:C:\temp\boot.wim /MountDir:C:\temp\wim /Index:1`
7. If you have a directory full of drivers for your environment, you can pre-emptively add them with `Dism /Image:C:\temp\wim /Add-Driver /Driver:c:\drivers /Recurse /ForceUnsigned`. (If not, cross your fingers and carry on with the next step.)
8. Copy mpam-fex64.exe (or the similarly named file for x86) into the top level of the wim from the top level of your virtual CD drive.
9. Save and unmount the wim: `Dism /Unmount-Wim /MountDir:C:\temp\wim /Commit`
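The mount/service/unmount cycle in steps 5–9 is easy to wrap in a script if you do it often. A minimal sketch that just generates the same Dism command lines shown above (paths and the function name are illustrative):

```python
def dism_commands(wim: str, mount_dir: str, driver_dir: str = "") -> list[str]:
    """Build the Dism command sequence from the steps above, as strings."""
    cmds = [f"Dism /Mount-Wim /WimFile:{wim} /MountDir:{mount_dir} /Index:1"]
    if driver_dir:
        # Optional driver injection (step 7)
        cmds.append(f"Dism /Image:{mount_dir} /Add-Driver /Driver:{driver_dir} /Recurse /ForceUnsigned")
    cmds.append(f"Dism /Unmount-Wim /MountDir:{mount_dir} /Commit")
    return cmds

for cmd in dism_commands(r"C:\temp\boot.wim", r"C:\temp\wim", r"c:\drivers"):
    print(cmd)
```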

I’ve also rewritten a lot of my batch scripts to be PowerShell, but I think that’s a post for another day.

Filed under Uncategorized

## They’re using HOW MUCH space?

This assumes a convention where databases are named after the responsible party with an underscore and a version number or description.  It also assumes that running this script is faster than remoting to the server and looking at the directory.  😉

```sql
DECLARE @dbname varchar(255), @CurrentRow int, @RowsToProcess int, @SQL nvarchar(max), @size float

CREATE TABLE #dbs (
    RowID int NOT NULL PRIMARY KEY IDENTITY(1,1),
    dbname varchar(255),
    owner varchar(80),
    size float
)

INSERT INTO #dbs (dbname)
SELECT name FROM sys.databases
WHERE name NOT IN ('master','msdb','model','tempdb')
SET @RowsToProcess = @@ROWCOUNT

SET @CurrentRow = 0
WHILE @CurrentRow < @RowsToProcess
BEGIN
    SET @CurrentRow = @CurrentRow + 1
    SELECT @dbname = dbname FROM #dbs WHERE RowID = @CurrentRow

    -- Total size (in 8 KB pages) of the database's files
    SET @SQL = 'update #dbs set size=(SELECT sum(size) FROM ' + @dbname + '.sys.database_files) where dbname=''' + @dbname + ''''
    EXEC sp_executesql @SQL

    -- Owner is everything before the first underscore in the database name
    SET @SQL = 'update #dbs set owner=(SELECT CASE WHEN CHARINDEX(''_'', dbname) > 0 THEN LEFT(dbname, CHARINDEX(''_'', dbname)-1) ELSE dbname END FROM #dbs where dbname=''' + @dbname + ''') where dbname=''' + @dbname + ''''
    EXEC sp_executesql @SQL
END

SELECT owner, COUNT(owner) AS DBCount,
       ((((SUM(size) * 8192) / 1024) / 1024) / 1024) AS TotalGB
FROM #dbs
GROUP BY owner

DROP TABLE #dbs
```
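The two fiddly bits are the owner-parsing rule (everything before the first underscore, or the whole name if there isn’t one) and the pages-to-GB math (`sys.database_files.size` is in 8 KB pages). A quick Python sketch of both, using made-up database names:

```python
def owner_of(dbname: str) -> str:
    """Everything before the first underscore, else the whole name."""
    return dbname.split("_", 1)[0]

def pages_to_gb(pages: int) -> float:
    """sys.database_files.size counts 8 KB pages."""
    return pages * 8192 / 1024 / 1024 / 1024

# Hypothetical databases: name -> size in pages
sizes = {"alice_v1": 131072, "alice_v2": 262144, "bob_reporting": 131072}
totals: dict[str, float] = {}
for name, pages in sizes.items():
    totals[owner_of(name)] = totals.get(owner_of(name), 0) + pages_to_gb(pages)
print(totals)  # {'alice': 3.0, 'bob': 1.0}
```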

Filed under scripting

## SFTP – copy smallest files first

I needed to SFTP a bunch of files ranging in size from 40 meg to 130 gig.  The default behavior of “mput *” is to copy the files in alphabetical order, but what happens then is that all the smaller files pile up behind the large ones.  This is bad because there’s a time limit and I want to copy as many as possible before the time limit passes.  The solution?

Pre-sort them by size, smallest to largest.

These are encrypted database backups, so the script requires a SQL Server instance name.  You can, of course, edit that behavior.

This depends upon PuTTY and Pageant.  Create an SSH profile with key-based authentication and use Pageant to handle the key as described in the link.  My SSH profile in this case is named “Atlanta” because that’s where the files are being copied.

This also assumes that you have directories in the SFTP server named with the pattern servername-instance because I’m running multiple streams.  Also, the last thing the database backup does is copy a file called “flag.txt” to the backup directory so the script knows when backups are done and can start copying.

```bat
set instance=%~1
if %instance%z==z goto :_help

set flag="f:\full backups\%instance%\flag.txt"

:loop
IF EXIST %flag% (
    GOTO file_found
) else (
    rem Cheap delay (about 3 seconds) before checking for the flag again
    ping 127.0.0.1 > nul
    GOTO loop
)

:file_found

rem /os sorts by size (smallest first), /b bare names, /a-d-h skips directories and hidden files
dir "f:\full backups\%instance%" /os /b /a-d-h > c:\temp\%instance%.txt

echo lcd "f:\full backups\%instance%" >> c:\temp\%instance%-control.txt
echo cd %computername%-%instance% >> c:\temp\%instance%-control.txt

FOR /F %%L IN (c:\temp\%instance%.txt) DO (
    echo put %%L >> c:\temp\%instance%-control.txt
)

echo quit >> c:\temp\%instance%-control.txt

"c:\Program Files (x86)\PuTTY\psftp.exe" Atlanta -b c:\temp\%instance%-control.txt

del c:\temp\%instance%-control.txt
del c:\temp\%instance%.txt

goto :_end

:_help
@echo.

:_end
exit
```

Linked as a separate file because smart quotes are not smart.
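The batch file leans on `dir /os` to do the sorting. The core idea, smallest-first ordering plus control-file generation, can be sketched in Python (the file sizes, names, and remote directory here are made up):

```python
def build_control_file(files: dict[str, int], remote_dir: str) -> list[str]:
    """Emit psftp batch commands, putting the smallest files first."""
    lines = [f"cd {remote_dir}"]
    for name in sorted(files, key=files.get):  # sort by size, ascending
        lines.append(f"put {name}")
    lines.append("quit")
    return lines

cmds = build_control_file(
    {"big.bak": 130_000, "small.bak": 40, "mid.bak": 900}, "myhost-Prod1"
)
print(cmds)
# ['cd myhost-Prod1', 'put small.bak', 'put mid.bak', 'put big.bak', 'quit']
```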

Filed under scripting

## Fun with PXE

At one of my previous positions, I was in charge of the PXE server.  I like PXE, so I set up a PXE server here, too.

It was a lot easier at my current place of employment, because I had access to the routers and the DHCP server.  It ended up being a straightforward Windows Deployment Services install.  I’m not only serving up boot and install images but also Acronis Recovery Media.  🙂

Alas, at the previous place, I was not allowed access to routers or DHCP.  This meant I had a lot of ‘splainin’ to do, and apparently it broke some of the network guys’ brains, so…

PXE rides on DHCP (option 60). You can run PXE on a network with separate PXE and DHCP servers, but the PXE server does need to listen on the DHCP ports.

It kind of goes like this:

```
PXE Client:  DHCPREQUEST, p.s. I'm a PXE Client.
DHCP Server:  DHCPACK, Here's your IP! p.s., PXE over there ->
PXE Server:  DHCPACK, Hello, I'll be your PXE server today.
```

Note:  That second DHCPACK does not contain an IP number.  It only contains PXE information. (So, dude who kept demanding that I look at the scope of the IP addresses I was handing out?  You fail Reading/Listening Comprehension 101.)

So you’re going to need to make changes to the DHCP server (to specify to PXE clients where to get PXE services) and possibly the routers as well (to specify the PXE server as a DHCP helper).  You also need to make sure that the PXE server is answering on the DHCP ports (open them in the firewall).  If you have separate PXE and DHCP servers, the PXE server doesn’t need to serve up IP numbers, but it does need to answer (DHCPACK, option 60).
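The PXE server’s side of that handshake comes down to inspecting option 60, the vendor class identifier, which PXE clients set to a string beginning with "PXEClient". A minimal sketch of that check, assuming the parsed options arrive as a dict of option number to raw bytes:

```python
def is_pxe_request(options: dict[int, bytes]) -> bool:
    """Option 60 (vendor class identifier) starts with b'PXEClient' on PXE boots."""
    return options.get(60, b"").startswith(b"PXEClient")

print(is_pxe_request({60: b"PXEClient:Arch:00000:UNDI:002001"}))  # True
print(is_pxe_request({60: b"MSFT 5.0"}))  # False -- an ordinary Windows DHCP client
```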

This is a pretty good description of how someone else got it working.


## Nagios Event Handlers on Windows

Nagios event handlers are WHERE IT’S AT, BABY, YEAH!  There are some services that I can just automagically restart without any problems.  (WSUS, SQL Agent, etc.) This way, instead of notifying me, Nagios can just fix the problem for me and We Need Never Know.

These instructions assume I’m running NSClient++.

The script is

```bat
@echo off
net start %1
@exit 0
```

(This is kept intentionally minimal so it’ll be reusable.)  I’m referring to this in nsclient.ini, under the “; A list of scripts available to run from the CheckExternalScripts module. Syntax is: <command>=<script> <arguments>” header.

```
restartwsus=scripts\runcmd.bat wsusservice
```

On the Nagios server, I’ve defined the check in commands.cfg as:

```
define command{
    command_name    restartwsus
    command_line    /usr/lib/nagios/plugins/check_nrpe -H '$HOSTADDRESS$' -c restartwsus
}
```

and in the service definition as:

```
define service{
    use                     generic-service
    host_name               wsusserver
    service_description     WSUS
    contacts                me
    check_command           check_nt!SERVICESTATE!-d SHOWALL -l WsusService
    event_handler           restartwsus
}
```

It looks like this is copy and paste-able.
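One refinement worth knowing about: the common Nagios pattern is to pass `$SERVICESTATE$` and `$SERVICESTATETYPE$` to the handler so it only acts on hard CRITICAL states, rather than restarting on every flap. That isn’t what my setup above does (it restarts unconditionally), but the decision logic is a one-liner; here is a hedged sketch:

```python
def should_restart(state: str, state_type: str) -> bool:
    """Only act on hard CRITICAL states -- the usual Nagios event-handler pattern."""
    return state == "CRITICAL" and state_type == "HARD"

print(should_restart("CRITICAL", "HARD"))  # True
print(should_restart("CRITICAL", "SOFT"))  # False -- still retrying, don't intervene yet
print(should_restart("WARNING", "HARD"))   # False
```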

Filed under monitoring

## A script for migrating to a new server

I stripped out a piece of company-specific logic, but…

List the databases you want to move in a file named “control.txt.”  (To migrate the entire server, paste in the output of `select name from sys.databases where name not in ('master', 'msdb', 'model', 'tempdb')`.)

Replace the example values in the below script with real information:


```bat
@ECHO ON

set controlfile=control.txt

set newipmdf=\\newserver\g$
set newipldf=\\newserver\e$
set oldserver=oldserver\Prod1
set oldmdfpath=d:\prod1
set newmdfpath=g:\data
set copymdfpath=m:\data
set newserver=newserver
set oldlogpath=e:\prod1
set newlogpath=e:\log
set copylogpath=l:\log
set movedmdfpath=%oldmdfpath%\moved
set movedldfpath=%oldlogpath%\moved

mkdir %movedmdfpath%
mkdir %movedldfpath%

net use m: %newipmdf%
net use l: %newipldf%

SETLOCAL DISABLEDELAYEDEXPANSION
FOR /F %%L IN (%controlfile%) DO (
    SET "line=%%L"
    SETLOCAL ENABLEDELAYEDEXPANSION
    ECHO !line!
    rem Detach from the old instance, copy over the mapped drives, attach on the new instance
    sqlcmd -E -S!oldserver! -Q"EXEC master.dbo.sp_detach_db @dbname = N'!line!'"
    copy "!oldmdfpath!\!line!.mdf" !copymdfpath!
    copy "!oldlogpath!\!line!_log.ldf" !copylogpath!
    sqlcmd -E -S!newserver! -Q"CREATE DATABASE [!line!] ON ( FILENAME = N'!newmdfpath!\!line!.mdf' ),( FILENAME = N'!newlogpath!\!line!_log.ldf' ) FOR ATTACH"
    rem Archive the originals so a re-run doesn't try to detach them again
    move "!oldmdfpath!\!line!.mdf" !movedmdfpath!
    move "!oldlogpath!\!line!_log.ldf" !movedldfpath!
    ENDLOCAL
)
ENDLOCAL

net use m: /delete
net use l: /delete
```


(The redacted data was a sharding-related item–updating a metadata table with the new location.  You can, of course, add your own logic after the attach, if needed.)
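Each pass through the loop is detach, copy, attach, archive. As a cross-check on the quoting (which is the fiddly part in batch), here is a Python sketch that generates the same per-database command strings, hard-coding the example server names and paths from the script:

```python
def migration_commands(db: str) -> list[str]:
    """Build the detach/copy/attach/move steps for one database (illustrative paths)."""
    return [
        # Detach from the old instance
        f"sqlcmd -E -Soldserver\\Prod1 -Q\"EXEC master.dbo.sp_detach_db @dbname = N'{db}'\"",
        # Copy data and log files over the mapped drives
        f'copy "d:\\prod1\\{db}.mdf" m:\\data',
        f'copy "e:\\prod1\\{db}_log.ldf" l:\\log',
        # Attach on the new instance, pointing at the new local paths
        f"sqlcmd -E -Snewserver -Q\"CREATE DATABASE [{db}] ON ( FILENAME = N'g:\\data\\{db}.mdf' ),"
        f"( FILENAME = N'e:\\log\\{db}_log.ldf' ) FOR ATTACH\"",
        # Archive the originals
        f'move "d:\\prod1\\{db}.mdf" d:\\prod1\\moved',
        f'move "e:\\prod1\\{db}_log.ldf" e:\\prod1\\moved',
    ]

for cmd in migration_commands("alice_v1"):
    print(cmd)
```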

Filed under scripting

## Yeah.

Twitter changed their authentication, and my old Twitter notifications (based on Ed Voncken’s work) seized up and failed.  I had to update the Python tweepy library to get them working again.