Greg's Tech blog

My technical journal where I record my challenges with Linux, open source SW, Tiki, PowerShell, Brewing beer, AD, LDAP and more...

Processing Object Properties in the PowerShell Pipeline

Sunday 21 of October, 2012

I was running a quick AD query via PowerShell today and needed to export the results to a CSV. The results contained two fields (lastLogonTimestamp and pwdLastSet) that are not human-readable, but I needed them to be. There is a quick transform you can make on these fields to convert them to a datetime value. It is simply:

Convert FILETIME to datetime
[datetime]::FromFileTimeUTC($_.lastLogonTimestamp)
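As an aside, FromFileTimeUTC works because these AD attributes are stored as Windows FILETIME values: a count of 100-nanosecond intervals since January 1, 1601 (UTC). Here's a minimal sketch of the same math in Python (the epoch and divide-by-10 are the real constants; the sample value is the well-known FILETIME for the Unix epoch):

```python
from datetime import datetime, timedelta

def filetime_to_datetime(ft):
    """A FILETIME is a count of 100-ns intervals since 1601-01-01 UTC."""
    return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

print(filetime_to_datetime(116444736000000000))  # 1970-01-01 00:00:00
```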


That part is easy, but how do we do that in the pipeline so all the objects get converted before the export? To do so, we can create a calculated property, using a specially crafted hashtable to describe the members to Select-Object. So this:

Select 'before'
select name, description, distinguishedname,pwdlastset,lastLogonTimestamp


becomes:

Select after
select name, description, distinguishedname,@{LABEL='pwdlastset';EXPRESSION={[datetime]::FromFileTimeUTC($_.pwdlastset)}},@{LABEL='lastLogonTimestamp';EXPRESSION={[datetime]::FromFileTimeUTC($_.lastLogonTimestamp)}}


Looking a bit more in-depth, the pwdLastSet entry was modified to specify the column LABEL and the EXPRESSION in a hashtable, with the EXPRESSION set to the new, calculated value.

@{LABEL='pwdlastset';EXPRESSION={[datetime]::FromFileTimeUTC($_.pwdlastset)}}

In case you are wondering, my full one-liner is a report on all disabled accounts in our directory:

Get-Disabled Accounts
Get-ADUser -LDAPFilter {(useraccountcontrol:1.2.840.113556.1.4.803:=2)} -properties pwdlastset,lastLogonTimestamp,description| select name, description, distinguishedname,@{LABEL='pwdlastset';EXPRESSION={[datetime]::FromFileTimeUTC($_.pwdlastset)}},@{LABEL='lastLogonTimestamp';EXPRESSION={[datetime]::FromFileTimeUTC($_.lastLogonTimestamp)}}|export-csv d:\data\disabled_accts.csv -notypeinformation

Using Windows Server Web Platform Installer

Saturday 20 of October, 2012

Quick instructions for installing Tiki using the Windows Web Platform Installer tool on Windows Server 2008 & Windows 7. Note - if you are installing from Microsoft's Web Gallery, the process is much more automatic. The following procedure is necessary only if you need to install from the stand-alone package.

  • Open Server Manager and navigate down Roles/Web Server to the Default Web Site. Select the Default Web Site.
  • Using the Features View, in the Action Pane (Right), select Import Application(option missing? - see below)*
  • Browse to the zip file, select it and click Open, then Next.
  • The Application package will open with all steps selected. Click Next.
  • Select your database options (MySQL, and Create New are defaults) - Click Next.
  • Enter a new application path and database server info if you don't like the defaults. Scroll down to enter the sa password and the tiki db user password.
  • Click Next to run the install
  • After the install completes, open up a browser and point it at the site. (Default URL is http://localhost/tiki).
  • At this point the Tiki install will begin


(*Note: if this option is missing, download and install the Web Platform Installer v4 from here http://www.microsoft.com/web/downloads/platform.aspx. You will also need to install MySQL from the Database category of Web PI as well as PHP from the Frameworks category.)

Posting large zip files to Tiki image galleries

Thursday 23 of August, 2012

I have a bunch of photos from Africa I want to upload to my Tiki-based site. See the Zambia blog for more about that! Now, Tiki is great, but who wants to post photos one at a time? Fortunately, Tiki offers a solution - it will unzip zipped files and post the photos. So I tried it and it failed. I had to make 4 changes to make this work. Here are the errors and the associated fixes:

- First error: PHP Warning: upload_max_filesize of 2097152 bytes exceeded. This was fixed with a change to the _upload_max_filesize_ entry in php.ini. The default is 2MB; I upped it to 40M.
- Second error: PHP Fatal error: Allowed memory size of 25165824 bytes exhausted. This was fixed with a change to _memory_limit_ in php.ini. I upped it to 48MB.
- Third error: POST Content-Length of 14402734 bytes exceeds the limit of 8388608 bytes. This was fixed with a change to _post_max_size_ in php.ini. I upped it to 40M.
- Fourth error: MAX_FILE_SIZE of 10000000 bytes exceeded. This required editing tiki/templates/tiki-upload_image.tpl to change the MAX_FILE_UPLOAD parameter for the upload page.

Nagios, NSClient++ and Powershell

Monday 13 of August, 2012

I did a talk recently at FOSSCON (cache) on how to monitor anything with Nagios. The focus of the talk was how to write NRPE checks to check obscure things in your IT environment. One of the checks I presented used a PowerShell script to check the files in a web site to ensure they hadn't changed. During development of and testing for my demo, I discovered that while my script was returning the correct errorlevel (in my case a 2), NSClient++ would only return 0 or 1.

The zero or one indicated to me that PowerShell was merely reporting whether it exited normally or not.

Here is the first configuration I tried in NSClient.

[NRPE Handlers]
check_sitesign=cmd /c echo check_sitesign.ps1|powershell.exe -command -

This config calls cmd.exe with the /c switch (run & exit), echoes the script name, and pipes it (via STDOUT) to powershell.exe.
PowerShell is executed with the -command parameter (execute the command and exit) and told via the '-' to read that command from STDIN.

After some investigation and testing from the command line, it became clear that -command was the issue. The script I was passing was seen as a single command, and it either succeeded or failed, hence the 0 or 1.

The next config line we tried was:

[NRPE Handlers]
check_sitesign=cmd /c echo check_sitesign.ps1|powershell.exe -file -

This looked promising and tested fine on the command line but still only returned 0 or 1.

Finally, I stumbled upon this idea

[NRPE Handlers]
check_sitesign=cmd /c echo check_sitesign.ps1; exit($lastexitcode)|powershell.exe -command -

This tells PowerShell to run two commands. The first runs the script, and the second tells PowerShell to exit with the value of the last exit code.

Counting a collection in Powershell

Wednesday 04 of July, 2012

I'm writing a script for AD that will manage old computer accounts and disable or delete them per our policy. I ran into an interesting issue that I want to document so I remember it.

The query I ran to find old accounts sometimes returns 0 objects, sometimes 1, and most times more than 1. While testing the code, I noticed that it was having an issue when the result set was empty, so I added this code to test for that case:

$oldComps = get-adcomputer -filter ...
if ($oldComps -eq $null) {
        $count = 0 } else {
        $count = ($oldComps).count
        }
    write-host "Processing $count accounts over $daysToDisable for disable"

When this runs I get a nice message saying how many accounts will get processed - unless there is one item returned by the query - in which case $count is blank. Hmmmm.

I went nuts over this for 15 minutes, then thought to ask the internet using this query:
"powershell AD collection count one item"
I found an article that suggested the issue was that when one item is returned from a PowerShell command, it is returned as a scalar (a single object, as everything in PowerShell is an object) and not as a collection. The simple fix is to wrap the command in the array subexpression operator to force it to return an array:
@( ... )

So my code from earlier becomes:

$oldComps = @(get-adcomputer -filter ...<snip>)
if ($oldComps -eq $null) {
        $count = 0 } else {
        $count = ($oldComps).count
        }
    write-host "Processing $count accounts over $daysToDisable for disable"

We can credit this as today's thing learned.

Sending Freecycle posts to Twitter

Sunday 01 of July, 2012

I've been a member of the Upper Bucks Freecycle list for years. I recently realized I missed most of the postings to the list and, as a frequent user of Twitter, thought it would be handy if the postings were there as well. I had just looked at the If This Then That service (ifttt.com) and realized I could solve this for the group.

There are a couple of simple steps for this. First was to activate two channels, GMail and Twitter. Activation is simple and generally involves authorizing these applications to access accounts on your behalf. Ifttt makes use of OAuth when possible, so this is generally secure (but use a strong password on your ifttt account, OK?).

For this use, I used my personal GMail account since I already had a FreeCycle subscription, but I set up a twitter account for this specific case (@UBFC_post (cache)).

Ifttt uses the concept of triggers(events) and actions. The trigger for this recipe was a GMail label of UBFC. (I'd already defined a GMail rule to file my UBFC messages).

The action is "Post a new tweet". The contents of the What's happening box are a GMail ingredient {{Subject}} and some extra text to take the reader to the freecycle list ("More Here: http://groups.yahoo.com... #freecycle #ubfc" )

Once you save the recipe, it runs regularly against your mailbox looking for changes and posts a tweet if the action filter matches.

Things I'd like to fix.
- The Freecycle list is closed so I cannot link the reader directly to the message. You must join to see the details
- I really should create another GMail account for this specific use. Currently, if I post something to the list, all of my responses go to Twitter as well since they get the UBFC label.



Fixing product activation in Windows XP

Saturday 30 of June, 2012

If the key is being marked as "Incorrect", use the Product Key Update Tool (PKUT) to change the key. Download and run PKUT from here (cache) and enter the key from your COA when asked.

Once the system reboots, try activating again using this command-line (paste into run box)

oobe/msoobe /a

Free WiFi for Small Business

Tuesday 05 of June, 2012

I had an interesting problem to solve recently. A small business I sometimes work for asked if I could implement free WiFi for their customers, and they wanted to keep the implementation cheap. I took a look at their network and found it to be rather insecure. While that needs to be addressed, it was outside the scope of this project.

So I started thinking about it, tested a few scenarios (including dd-wrt), and came up with the following. NetGear makes a great little wireless router, the WGPS606. Besides broadband routing, it offers two wireless SSIDs, including a guest wireless. On the guest wireless, access can be restricted to only the WAN port of the router. This was key to implementing this securely.

The business is using a Comcast connection, and the Comcast router has very little configuration available. I uplinked the NetGear router into the Comcast router and, most importantly, moved all other LAN connections into the new NetGear router so the only connection to the Comcast router was the NetGear. With this config, traffic for the PCs on the LAN is segregated from traffic on the customer wireless, and everything works well.

\\Greg
"That which we resist the most is what we become"

Powershell for Password Expiration notices

Wednesday 30 of May, 2012

We have a group of folks who only ever remote into our environment and because of that often don't receive password expiration notices from Windows. As luck would have it, often their passwords expire over the weekend and they're locked out until Monday morning. We devised this script to send email notifications in advance of expiration.

First I went spelunking because I'm lazy and I know I'm not the first to need to do this. I found an excellent module in the TechNet Script Repository called Search-ADUserWithExpiringPasswords (cache). It's contributed by Steve Blossom and I thank him for doing the heavy lifting for this project!

The script has one shortcoming. It doesn't allow you to restrict which OUs to search. I've posted a script modification in the Q&A for Steve's upload.

The next step was to wrap some code around the module to do what we needed it to do.

The script starts by setting a few constants and including the Search-ADUserWithExpiringPasswords module.

$smtpserver = "mail.myco.com" 
  $emailFrom = "HelpDesk@myco.com" 
  $HelpDeskTo = "HelpDesk@myco.com"
  $DaysToNotify = "7"
  $SendEmpEmail = $True
  
. C:\NetAdmin\Notify-PasswordExpiration\Search-ADUSerWithExpiringPwd.ps1
Function CPwdLastSet ($pls)
{
	[datetime]::FromFileTimeUTC($pls) 
}


Next, we reset some counters and do the actual search

$logtxt = ""
$count = 0
Search-ADUserWithExpiringPasswords -searchbase "ou=home_employee,ou=user accounts, `
ou=ourOffice,dc=myco,dc=com" -TimeSpan $DaysToNotify `
-Properties mail,PwdLastSet,givenName,sn|`


This command from Steve's module searches the OU specified for passwords expiring in $DaysToNotify (in this case 7) and returns the necessary attributes. Notice that the search command is not terminated but is the beginning of a pipeline to the remainder of the script. The next part of that pipeline processes each returned user object and sends email.

ForEach-Object {
  $today = Get-Date 
  $logdate = Get-Date -format yyyyMMdd 
  $samaccountname = $_.samAccountName 
  $FName = $_.givenName
  $Lname = $_.sn
  $count += 1
  $emailTo = $_.mail  
  
  $passwordLast = cPwdLastSet($_.pwdLastSet) #this is a date now
	$maxAge = (new-object System.TimeSpan((Get-ADObject (Get-ADRootDSE).defaultNamingContext -properties maxPwdAge).maxPwdAge))
	$passwordexpirydate =  $passwordLast.subtract($maxAge)
  $daystoexpiry = ($passwordexpirydate - $today).Days
  $expirationDate = $passwordexpirydate.ToString("D")


This part of the foreach loop calculates the number of days to password expiration and the expiration date, so we can use them in the email message.
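The date arithmetic above can be sketched in Python (with hypothetical dates; the key detail is that AD stores maxPwdAge as a negative count of 100-ns intervals, which is why the script subtracts it from pwdLastSet):

```python
from datetime import datetime, timedelta

# AD stores maxPwdAge as a NEGATIVE count of 100-ns intervals;
# -36288000000000 is a 42-day policy (hypothetical example value).
max_pwd_age_raw = -36288000000000
max_age = timedelta(microseconds=max_pwd_age_raw // 10)  # a negative timespan

pwd_last_set = datetime(2012, 5, 1)      # hypothetical pwdLastSet, already converted
expiry = pwd_last_set - max_age          # subtracting the negative value adds 42 days
today = datetime(2012, 5, 25)
days_to_expiry = (expiry - today).days

print(expiry, days_to_expiry)  # 2012-06-12 00:00:00 18
```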

$subject = "Your network password will expire soon."     
  $body = "$FName $LName, `n`n" 
  $body += " Your password will expire in $daysToExpiry day(s) on $ExpirationDate.  Please change your password before it expires to ensure you can continue to work. `n`n" 
  $body += "For instruction on how to change your password please refer to this document on the Employee Zone: http://OurSharepoint/SiteDirectory/ee_Info/Shared%20Documents/NetAdmin/HomeworkerPwChange.doc"
  $body += " `n`nIf you are unable to change your password, please contact us at 215 734-2253 `n`n" 
  $body += "Thank you! `n`nYour SysOps Team"
  
  
   #Employee notification
   if ($SendEmpEmail) {
	#Send-MailMessage -To $emailTo -From $emailFrom -cc "gmartin@myco.com" -Subject $subject -Body $body  -SmtpServer $smtpserver 
	Send-MailMessage -To $emailTo -From $emailFrom -Subject $subject -Body $body  -SmtpServer $smtpserver 
	}

   $logtxt += $today.ToString("d")
   $logtxt +=" Email was sent to $samAccountName for password expiring on $passwordexpirydate`n" 
   
  }


The next section of the foreach generates the text for the mail message and sends it using Send-MailMessage.

Finally, we generate a summary message for the helpdesk.

$logtxt += @"
   
   $count employee(s) notified.
   
   This message is sent from a scheduled task called Notify-PasswordExpiration running on ADMON.  The task queries Active Directory 
   for homeworker accounts whose password are expiring in the next 7 days and emails the employee.  It also notifies Help Desk with a summary. 
   No action is generally necessary except by the notified employees.  Please see the Windows Engineering team if you need assistance.

Task home:
\\ADMON\C$\Netadmin\Notify-PasswordExpiration

Last update:
GjM 
May 2012
"@
	#system notification
  #Send-MailMessage -To $emailFrom -From $emailFrom -Subject "Password Expiration notices" -Body $logtxt  -SmtpServer $smtpserver 
  Send-MailMessage -To $HelpDeskTo -From $emailFrom -cc "gmartin@myco.com" -Subject "Password Expiration notices" -Body $logtxt  -SmtpServer $smtpserver


A couple other notes:

  • We use a job server for running these maintenance tasks. I like to include the UNC to the job so someone can fix it in my absence
  • All of these scripts are signed with a domain-based CA code-signing cert



Leave a comment if you have questions

Linux Backups

Wednesday 25 of April, 2012

Found a backup script and modified it - see below. The script is saved in /etc/cron.weekly and should run at 4:30 every Sunday or Monday (day 0). The tape device is /dev/nst0.

#!/bin/bash
# Create backups of /etc, /home, /usr/local, and more

PATH=/bin:/usr/bin
tape="/dev/nst0"
backupdirs="/etc /root /boot /home /usr/local /var/lib /var/log /var/www/htdocs"

# Make MySQL dumps
echo "Dumping mysql databases"
echo "Dumping tiki database"
mysqldump --password=M@ddexit --flush-logs --opt tiki > /usr/local/MySqlDumps/tikidb
echo "Dumping mysql database"
mysqldump --password=M@ddexit --flush-logs --opt mysql > /usr/local/MySqlDumps/mysqldb

echo "Rewinding tape"
mt -f $tape rewind
for path in $backupdirs
do
    echo "System backup on $path"
    tar cf $tape $path 1>/dev/null
    sleep 2
done
echo "System backups complete, status: $?"

echo "Now verifying system backups"
mt -f $tape rewind
for path in $backupdirs
do
    echo "Verifying $path...."
    if tar tf $tape 1>/dev/null
    then
        echo "$path: verified"
    else
        echo "$path: error(s) in verify" 1>&2
    fi
    mt -f $tape fsf 1
done
mt -f $tape rewoffl
echo "Please remove backup tape" | wall

Amahi File Sharing Issues

Friday 06 of April, 2012

I run Amahi and use the samba file sharing feature to feed data to my WDTV Live. It works great - until this week. We couldn't get to any of the shares. I found a number of issues.

greyhole error
Can't find a vfs module [greyhole]

This was caused by an install problem with greyhole. The link to the greyhole library was missing from /usr/lib/samba/vfs. The fix was reinstalling greyhole. I run an unofficial install of greyhole and used this to fix it:

rpm -Uvh --force http://www.greyhole.net/releases/rpm/i386/hda-greyhole-0.9.9-1.`uname -i`.rpm


Next, I saw two issues with samba in /var/log/messages. They may have been related:

/var/log/messages
smbd_open_once_socket: open_socket_in: Address already in use

current master browser = UNKNOWN

The fix for this was to set IPv6 sockets to bind IPv6-only by running:

sysctl net.ipv6.bindv6only=1

I also added this to rc.local so it takes effect at boot time.

Specify a PowerShell Module Manifest

Friday 06 of April, 2012

I loaded the Authenticode (cache) module from PoshCode today. Thanks to Joel Bennett for the module. In order to not type the cert info all the time, I had to specify a module manifest with PrivateData. Here's the command that worked:

New-ModuleManifest snippet
New-ModuleManifest H:\Documents\WindowsPowerShell\Modules\Authenticode\Authenticode.psd1 -Nested H:\Documents\WindowsPowerShell\Modules\Authenticode\Authenticode.psm1  -ModuleToProcess "Authenticode" -Author "gmartin" -Company "MyCo" `
-Copy "2012" -Desc "script signing tools" -Types @() -Formats @() -RequiredMod @() -RequiredAs @() `
-fileList @() -PrivateData AE713D19867XXXXXXXXXX622F4B69DB5F4EE01B2

Webmin add-on

Saturday 11 of February, 2012

I downloaded Webmin as it seems to have become somewhat of an MMC for Linux. The install went as planned. Webmin is excellent at guessing what needs to be done, asking if it should, and then just working. Webmin is fully extensible, and there are hundreds of modules for managing every possible aspect of the system.

I was on a hunt to find a mailman module (mailman is a mailing list manager). All references to the existing mailman module say the new version is in development, but I cannot find any real information on it. During my discovery, I found du.wbm, which purports to display disk usage information in pretty pie charts. I'm not afraid to get dirty, but graphics are a great way to monitor things.

I downloaded it and used the Webmin interface to install. It found that I did not have the Perl GD module and asked to install it. I said yes and had a problem with the make: I received an error "gd.h: file or directory not found". After asking around on LinuxQuestions, DaHammer suggested I run 'locate libgd' to verify that I have libgd installed - duh! (To my credit, I had done a find for libgd* and received results. I think there are other modules closely named.) At this point I need to download and install libgd, and I'm awaiting a link for the download.

\\Greg

Integrating Tiki forums and Mailman

Wednesday 08 of February, 2012
Adding a mailing list to the forums

I want the GaughanPA posts to wind up in a forum.
To do so, I will add an alias for the forum (done).
The alias will call a script which processes the message and posts it to the forum.
Here's the aliases entry:
ForumTest: "|/etc/mail/mailecho post ForumTest"


Here's the /etc/mail/mailecho script for reading from stdin:

#!/bin/sh
while read msg
do
        echo "$msg" >> /tmp/mailecho.log
done


Todo next:
Learn enough PHP to put the contents of the message into a database entry for the forum lol

That's enough for tonight
\\Greg

Using Nagios to monitor for WinSCP & SSH Host key check

Friday 27 of January, 2012

I love Nagios because the toolset is so simple and powerful that we can monitor almost anything that has an IP address. We have an automated process that delivers data once a day via secure FTP using WinSCP. The data is delivered via private link to our parent company, and once or twice per year the SSH host key changes. We never find out until we detect the job is failing. Today, when it happened, we set out to find a way to test for this condition so we might know ahead of time what's happening.

The solution uses the following:

  • WinSCP console mode
  • Nagios' NSClient++ and an NRPE check
  • Windows shell file


The first issue is getting WinSCP to report back in an automated way that the host key is changing. I did that using this WinSCP command script

option batch on
open auser@theirhost.company.com
pwd
exit

and this WinSCP command line:

winscp.exe /console /script=cmdocc.txt /log=tt.log

The option batch on command tells WinSCP to immediately cancel any input prompt. The open command tells WinSCP to open a connection using the stored connection specified.

When the open is executed successfully and nothing has changed, the script then checks the current directory and exits. If the host key has changed, WinSCP prompts to accept it, but the "option batch on" setting replies no, the connection fails, and the script exits - but not before logging the condition to the specified log file.

The final piece is this windows command shell.

@echo off
:: TestHostKey - test the stored ssh host key and reports if it has changed
:: Uses winscp batch script to connect to the appropriate host.  If the host key is different, it will log to a file and exit
:: Script tests for 'key not verified' in output log
set WINSCPEXE=\netadmin\winscp\winscp.exe
set WORK=\netadmin\nrpe

cd %WORK%
del tt.log /q

%WINSCPEXE% /console /script=hostchk.txt /log=tt.log
findstr /i /c:"Host key wasn't verified!" tt.log >nul

if %errorlevel% NEQ 1 ( 
	echo Host Key does not match
	exit 1
 ) ELSE (
	echo Host key OK
	exit 0	
 )


The script orchestrates the call of WinSCP and after it exits uses findstr to look for "Host key wasn't verified!" in the log file. Based on the results, it sets the exit code and sends an output string to stdout.

This is where Nagios comes in. Nagios uses two pieces of information to monitor a host - the exit code of the check command and the output string. The exit value allows Nagios to decide if the service is healthy and the output string is usually some clear text for the human.
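For context, the standard plugin convention maps exit codes to service states; here's a minimal Python sketch of that mapping (the interpret helper is just for illustration):

```python
# Standard Nagios plugin exit codes and the states they map to.
NAGIOS_STATES = {0: "OK", 1: "WARNING", 2: "CRITICAL", 3: "UNKNOWN"}

def interpret(exit_code, output):
    """Pair a check's exit code with its text output, as Nagios does."""
    state = NAGIOS_STATES.get(exit_code, "UNKNOWN")
    return f"{state}: {output}"

print(interpret(1, "Host Key does not match"))  # WARNING: Host Key does not match
```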

We use NSClient++. To make this work, I had to make the following changes to NSC.ini:

  • enable NRPEListener.dll by uncommenting the entry in the modules section
  • set the NRPE port by uncommenting the port line in the NRPE section
  • set use_ssl=1 in the same section
  • Add the following entry to the NRPE Handlers section
check_hostkey=c:\netadmin\nrpe\checkHostKey.cmd

Note: make sure all the other samples are commented out in that section unless you are using them.

On the Nagios server, use the check_nrpe command to issue the check_hostkey against this server on a scheduled basis and tell someone when there is a problem.

This is a bit of a house of cards that took me about 90 minutes to piece together, but the point in all this is to show that with a bit of ingenuity you can put together a solution to test anything with Nagios.

\\Greg

Setting DHCP hostname

Monday 23 of January, 2012

Edit /etc/rc.d/rc.inet1; find and uncomment #DHCP_HOSTNAME="zzzzzzzz" and change it to DHCP_HOSTNAME="hostname". Under Slackware, run netconfig to walk through an automated script.
\\Greg

Open Upload Invitation MIME fixes

Thursday 12 of January, 2012

I've been working to implement a web-based file exchange system for work as an alternative to sending files through email. Early on, we identified Open Upload as a good option for this. While the latest release, 0.4.2, is stable, it is over a year old, and there have been significant changes implemented in svn. It is a much better option.

While trying to build it for deployment, I discovered a flaw in the email notifications for invitations. The problem is the MIME message format is currently incorrect. After some exchange of email with the project lead I added two files invitationNotifytext.tpl and invitationNotifyHtml.tpl that contain the information needed to generate clean email messages.

In addition, I modified the uploadNotifyHtml.tpl and uploadNotifytext.tpl files to fix some minor typos. I'm attaching the files here in case you need them. Once they are committed to svn, I'll remove these.

invitationNotifyText.tpl
invitationNotifyHtml.tpl
uploadNotifyText.tpl
uploadNotifyHtml.tpl

Sorting Music Files with Powershell

Wednesday 11 of January, 2012

I'd collected a bunch of music files over time, all in a single directory. I want to sort them into a folder structure arranged at the top level by artist and then by album.

I found a great article by Tobias Weltner here (cache) which is the basis for my script.

The details for music files can be accessed through a Windows Shell object and we can set that up like this:

$path = 'C:\Users\Public\Music\Sample Music\Maid with the Flaxen Hair.mp3'
$shell = New-Object -COMObject Shell.Application
$folder = Split-Path $path
$file = Split-Path $path -Leaf
$shellfolder = $shell.Namespace($folder)
$shellfile = $shellfolder.ParseName($file)

and once we have that we can list the possible attributes with this code

0..287 | Foreach-Object { '{0} = {1}' -f $_, $shellfolder.GetDetailsOf($null, $_) }

Now that you see how to read these Extended attributes, read Tobias' code for the Add-FileDetails function. It enumerates the requested attributes for a given file. I'll not repeat that code here.

What I needed was some code that would

  • enumerate all the files in a directory,
  • find the Artist and album name for each file
  • construct a destination path for each file
  • create any needed folders
  • move the files

Warning - This code is NOT pretty

$sourcepath = ""
   #enumerate fields & gather attributes
dir "$sourcepath*.*" |Add-FileDetails|foreach {
    $mfile = $_
       #build artist level path
    $mpath = "" + $mfile.Ext_Artists
    $mpath = "\Music\" + $mpath

      #build album level path and combine them
    $mAlbum = $mfile.Ext_Album
    $mapath = $mpath + "\" + $mAlbum
    
    write-host $mapath $mfile.name

    if (! (Test-path $mpath)) {
        echo "Creating artist folder: $mpath"
        mkdir $mpath
        }
    if (!(test-path $mapath)) {
        echo "Creating album path $mapath"
        mkdir $mapath
        }

       #assuming the path was created or already existed, move files
    if (Test-path $mapath) {
        Move-item -literalpath $mfile $mapath
        #$mfile.move($mpath)
    } else {
        echo "Unable to create path: $mapath"
    }
  }


(Note: I made one mod to Tobias' code that you may notice. Instead of prefixing the attributes with Extended_, I shortened it to Ext_.)

Issues
One thing I had to deal with was special characters in some of the artist names and album names. I did that via a series of -replace commands that I left out of the code above for readability. Here's a snippet. You may need to recreate this based on your data.

#replaces \, /, :, ;, and ' in names with an appropriate alternative
    $mpath = "" + $mfile.Ext_Artists
    $mpath = $mpath -replace "\\","-"
    $mpath = $mpath -replace "/","-"
    $mpath = $mpath -replace ":","-"
    $mpath = $mpath -replace ";","-"
    $mpath = $mpath -replace "'",""

Passive Nagios Checks

Wednesday 07 of December, 2011


Had to learn how to submit passive Nagios checks. Here are the steps

  • Define a service or modify a service template to set the directives
passive_checks_enabled 1
active_checks_enabled	0
  • Install and configure nsca service on nagios
  • Install and configure send_nsca utility on the server that needs to submit the check.

  • Write your service check to output the following text to a file called outfile:
Hostname;Service Description;return code;text output
  • execute send_nsca to send the output to the nagios server with this command-line
cat outfile | send_nsca -H nagios_svr_addr -d \; -p 5667

This command sends the file output received from stdin to send_nsca.
Note: the default delimiter for the output file is a tab. I changed it to the semi-colon for simplicity here and set that option by using -d on the command line. As ; is special to bash, I escaped it with \.
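A minimal sketch (with hypothetical host and service names) of building that delimited line before it gets piped to send_nsca:

```python
def passive_check_line(host, service, return_code, output, delim=";"):
    """Build one line in the Hostname;Service Description;return code;text format."""
    return delim.join([host, service, str(return_code), output])

line = passive_check_line("web01", "Disk Usage", 2, "CRITICAL - /var at 95%")
print(line)  # web01;Disk Usage;2;CRITICAL - /var at 95%
```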

Nagios check_by_ssh configuration

Tuesday 29 of November, 2011


I just spent a couple hours with nagios setting up remote Linux monitoring using check_by_ssh. There are some pitfalls that I discovered that may save you some trouble.

- The local user account that nagios runs under and that the checks will be initiated by (for my install it's nagios) must have a home directory and shell defined. Failure to do so may result in the error: Remote command execution failed: Could not create directory '/.ssh'.

- To complete configuration of the nagios account, log in as nagios (I used su - nagios), then use ssh to log in to the remote host you wish to monitor and successfully cache the remote host's fingerprint.

- The account on the remote box used for monitoring must also have a valid shell and home directory defined. Failure to do so may result in a No protocol specified error.

Copying Active Directory OU Structure, Groups and Accts to a Test Domain (CopyAD)

Saturday 25 of June, 2011

I need to copy some of our AD contents into a test domain. This has come up before so I wrote a collection of PowerShell scripts to handle the process.

For our current needs, we need OUs, users, groups and group memberships copied over. I worked this out over a couple days and developed a series of scripts that exports, renames and imports the objects into AD.

The scripts come in 8 parts - 4 export and 4 import. The import scripts must be executed in a particular order so that the necessary parts are available when needed. That order is: OUs, users, groups, group memberships.

The export scripts are interesting because they include a large number of AD attributes for the users yet filter out things like the SIDs & passwords, so they are safe to use from a security perspective. Note too that the user accounts are disabled upon creation. This is easily remedied but left to the scripter.

The import scripts are a bit more complicated, as they replace certain attributes with corresponding values from the new domain. Specifically, UPN, DN and mail are fixed up. Also, there's a neat trick played with split to drop off the cn=username portion of the DN so that the OU path for the new object is correct.
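The split trick can be sketched like this (shown in Python with a hypothetical DN; the scripts themselves use PowerShell's -split):

```python
# Drop the leading "CN=username," so what remains is the OU path
# where the new object should be created.
dn = "CN=jdoe,OU=Sales,OU=User Accounts,DC=myco,DC=com"
ou_path = dn.split(",", 1)[1]   # split on the FIRST comma only
print(ou_path)  # OU=Sales,OU=User Accounts,DC=myco,DC=com
```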

One last point: I chose not to deal with an Exchange install in my test domain, so some of the Exchange-related groups error out during creation.

Find the scripts as the copyAD Suite in the TechNet Script Repository.

Expanding a disk in VirtualBox

Sunday 29 of May, 2011


This isn't a problem I should have had to deal with, but I can be stupid. Like other virtualization products, VirtualBox offers a dynamic disk option that allows you to overestimate how large a disk you'll need without penalizing you with wasted disk space. VBox will grow the disk image as needed rather than pre-allocating the entire disk.

I built a Win7 virtual some time ago and set the C:\ to a max of 20GB. In hindsight, I should have made it 40GB. Today I dealt with it using Clonezilla and Windows 7 disk manager. Here's how.
(Note: After doing all this, I learned about VirtualBox's built-in feature to resize a disk using VBoxManage modifyhd. You may want to look into that first.)
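For reference, a hedged sketch of that built-in route. The disk path and target size are assumptions; VBoxManage takes the new size in megabytes, so convert from GB first. The command is echoed rather than executed here.

```shell
# VBoxManage modifyhd's --resize argument is in MB; compute it from GB.
TARGET_GB=40
TARGET_MB=$((TARGET_GB * 1024))
# Run this against your own .vdi path with the VM powered off:
echo "VBoxManage modifyhd '/path/to/Win7.vdi' --resize $TARGET_MB"
```

After the resize, the partition still needs extending inside the guest, just as in the Clonezilla route below.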

Plan

My virtual had two IDE drives (master & slave). The master was the system (c:) disk and the slave a data disk. My plan was to create a new virtual disk; clone the system disk to it and discard the original disk.

Create a new disk

I opened the settings for the virtual and created a new disk. I attached it to a SATA controller since the IDE controller was full. I made the disk dynamic with a max size of 40GB.

Clone the disk

With the new disk attached, I booted the virtual with the Clonezilla ISO mounted from the host optical drive. Clonezilla booted and I followed the prompts to clone from disk to disk. The source disk in my case was sda and the destination sdc. Clonezilla made quick work of the clone and I powered off the system once it had completed.

Reset the drive attachments

Since I already had a master IDE disk attached to the virtual, I couldn't boot to the new disk. I opened the drive settings again and did a couple of things:
- Dropped the original system disk
- Detached the data disk and reattached it to SATA along with the new system disk

Boot and resize the new disk

I then booted the VM on the new system disk, which it handled without issue. The system partition was still 20GB; this was expected behavior, since Clonezilla cloned the partition table along with the partition data. Windows Disk Manager showed the 20GB system partition and 20GB of unallocated space on the disk. I selected the system partition, right-clicked and chose Extend Volume, and the wizard stepped me through extending the volume into the remaining unallocated space.

Reboot

While Windows didn't ask for it, I restarted the virtual to ensure everything was healthy.

Sorting Computers by OS with PowerShell

Wednesday 11 of May, 2011

We needed to move a hundred or so computers into different OUs based on their operating system today. We weren't sure how to approach this at first, but a quick search revealed that AD tracks a computer account's operating system in an attribute called, oddly enough, operatingSystem. With that in mind, we developed the following PowerShell command line to find and move Windows XP accounts.

Sorting Computers with Powershell
get-adcomputer -SearchScope onelevel -searchbase "ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" -filter 'operatingsystem -like "Windows XP*" ' | move-adobject -targetpath "ou=FDCC,ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" -passthru

To talk this through a bit, the first part of this is the query to locate computer accounts:
Search
get-adcomputer -SearchScope onelevel 
  -searchbase "ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" 
   -filter 'operatingsystem -like "Windows XP*" '


This command uses the Get-ADComputer cmdlet to

  • search the OU specified by the -SearchBase parameter
  • restrict the search to that OU only (not sub-OUs) via -SearchScope onelevel
  • match computers whose operatingSystem starts with "Windows XP" (and ends with anything) via the -Filter wildcard
  • send the results down the pipeline
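The -like filter is a plain prefix wildcard. Purely as an illustration (the OS strings here are made up), the same matching expressed in shell terms:

```shell
# Prefix-match OS names the way -Filter 'operatingsystem -like "Windows XP*"' does.
for os in "Windows XP Professional" "Windows 7 Enterprise"; do
    case $os in
        "Windows XP"*) echo "$os: matches" ;;
        *)             echo "$os: no match" ;;
    esac
done
```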

The second half of this command
Move
move-adobject -targetpath "ou=FDCC,ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" -passthru

uses the Move-ADObject cmdlet to move the AD objects passed through the pipeline to the AD container specified by -Targetpath.

It took us a total of 10 minutes to work through the help to define the search and move actions, test using the -WhatIf switch, and implement. We then repeated the whole thing to search for Windows 7 PCs and move them into a separate container.


Running PS3 Media Server as a non-root service on Fedora and Amahi

Tuesday 29 of March, 2011

Sorry for the detailed title, but this problem has been solved several times so I want to justify why I am solving it again.
I use PS3 Media Server (PMS) on Amahi to stream content to my Sony TV and PS3, and I am helping to package the app for other Amahi users.

In doing so, I was having difficulty running PMS as a non-root user on Amahi and discovered the following:

  • Fedora services use the runuser command to launch a new shell as the specified user and run the specified command in that shell (the runuser command line is in the /etc/init.d/functions script)

  • PMS locates the PMS.conf file within the current directory when it is executed. There is no current way to specify the location of the conf file.

  • When runuser executes a command, it drops the user into their home directory first and executes from there. For the apache user (which Amahi uses for these services), there is no home directory making it even more confusing.


With all this background, the solution was straightforward: change the command we pass to runuser to cd to PMS_HOME and then execute the java command. Here's the change I made to the pmsd service script:

daemon -20 --user $PMSUSER "cd $PMS_HOME && $JAVA $JAVA_OPTS"
The key is the addition of cd $PMS_HOME && prior to executing the java command.
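A minimal illustration of why that wrapper matters: since PMS looks for PMS.conf in the current directory, the shell must enter PMS_HOME before launching it. The paths here are throwaway examples, and plain cat stands in for the java command.

```shell
# Set up a fake PMS_HOME containing the config file.
PMS_HOME=$(mktemp -d)
echo "port = 5001" > "$PMS_HOME/PMS.conf"

# Launched from elsewhere (as runuser does from the user's home directory),
# the config is not found:
( cd / && cat PMS.conf 2>/dev/null ) || echo "conf not found"

# With the same wrapper the init script passes to daemon/runuser, it is:
( cd "$PMS_HOME" && cat PMS.conf )
```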


The entire service script is below:
(Note: I did not write this script. Thanks to the Amahi & Fedora communities for that.)

#!/bin/bash
#
#       /etc/rc.d/init.d/pmsd
#
# Starts the PS3 Media Server
#
# chkconfig: 345 70 80
# description: PS3 Media Server
# processname: java

### BEGIN INIT INFO
# Provides: pmsd
# Required-Start: $syslog $local_fs
# Required-Stop: $syslog $local_fs
# Default-Start:  3 4 5
# Default-Stop: 0 1 6
# Short-Description: start and stop pmsd
# Description: PS3 Media Server
### END INIT INFO

#PMSUSER=pmsd
PMSUSER=apache
PMSGROUP=users
JAVA=`which java`

PMS_HOME="/var/hda/web-apps/ps3mediaserver/html"
PMS_JAR="$PMS_HOME/pms.jar"
PMS_JARS="$PMS_HOME/update.jar:$PMS_HOME/pms.jar:$PMS_HOME/plugins/*"
JAVA_OPTS="-Xmx768M -Xss16M -Djava.encoding=UTF-8 -Djava.net.preferIPv4Stack=true -classpath $PMS_JARS net.pms.PMS -Djava.awt.headless=true $@ >>/var/log/pmsd.log 2>>/var/log/pmsd.log &"	

PMSDPID=/var/run/pmsd.pid

export PMS_HOME

# Source function library.
. /etc/rc.d/init.d/functions

RETVAL=0

start() {

        # Check if pms is already running
        if [ ! -f /var/lock/subsys/pmsd ]; then
            echo -n $"Starting PMS daemon: "
            daemon -20 --user $PMSUSER "cd $PMS_HOME && $JAVA $JAVA_OPTS"
            RETVAL=$?
            [ $RETVAL -eq 0 ] && touch /var/lock/subsys/pmsd
            echo
        fi
        return $RETVAL
}

stop() {

        echo -n $"Stopping PMS daemon: "
        killproc $JAVA
        RETVAL=$?
        [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/pmsd
        echo
    return $RETVAL
}


restart() {
        stop
        start
}

case "$1" in
start)
        start
        ;;
stop)
        stop
        ;;
restart)
        restart
        ;;
status)
        status $JAVA
        RETVAL=$?
        ;;
*)
        echo $"Usage: $0 {start|stop|status|restart}"
        RETVAL=2
esac

exit $RETVAL

Restoring a DFS root

Sunday 13 of March, 2011

We make heavy use of Microsoft's file share virtualization technology - Distributed File System (DFS). Today, one of our root DFS shares got deleted and we had to scramble to get it back. Here's what we tried and what worked.

Since the object seemed to reside in AD, under the CN=System container in our domain, the first thing we tried was an Active Directory undelete (a recycle bin was added to AD in Windows Server 2008 R2). We tried several tools and methods to restore the object.

Each of these failed with a similar error. It appears that the AD object had some key attributes removed when it was deleted, so the object in the Deleted Objects container was not a valid AD object (and hence would not restore). My guess is that Microsoft has not designed all AD object deletions with restoration in mind.

So here's what we did that worked

  1. We restored one of our virtualized DCs to a new VM with no network connection
  2. Since the DFS root was not hosted on this DC, we created an identical DFS root on that DC
  3. AD magically repopulated the DFS shares that were configured below the deleted root; we suspect this worked because that DC's copy of AD still thought the root existed
  4. Exported the configuration using dfsutil
  5. Shut down the VM and opened the VHD so we could copy out the files dfsutil created
  6. Edited the dfsutil output to remove the entry for the new DC
  7. Imported the DFS config using dfsutil with the /Set switch
  8. Tested


Lessons learned:

  • We are considering a scheduled task to export the DFS config using dfsutil
  • We set the "Protect object from accidental deletion" on each of the DFS objects in AD
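A sketch of what that scheduled export might look like. This uses the Windows Server 2008 dfsutil root export form, but the namespace path and backup location are assumptions, so the command is only echoed here; the dated-filename idea is the point.

```shell
# Keep dated exports of the namespace so a deleted root can be re-imported.
STAMP=$(date +%Y%m%d)
echo "dfsutil root export \\\\ourco.com\\dfsroot d:\\backup\\dfs-$STAMP.xml"
```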


Note: I doubt this is a Microsoft approved solution, so, YMMV.

If you have thoughts on this, leave a comment here or on Twitter (I'm @uSlacker)