Greg's Tech blog

My technical journal where I record my challenges with Linux, open source SW, Tiki, PowerShell, Brewing beer, AD, LDAP and more...

Powershell for Password Expiration notices

Wednesday 30 of May, 2012

We have a group of folks who only ever remote into our environment and because of that often don't receive password expiration notices from Windows. As luck would have it, often their passwords expire over the weekend and they're locked out until Monday morning. We devised this script to send email notifications in advance of expiration.

First I went spelunking because I'm lazy and I know I'm not the first to need to do this. I found an excellent module in the TechNet Script Repository called Search-ADUserWithExpiringPasswords (cache). It's contributed by Steve Blossom and I thank him for doing the heavy lifting for this project!

The script has one shortcoming. It doesn't allow you to restrict which OUs to search. I've posted a script modification in the Q&A for Steve's upload.

The next step was to wrap some code around the module to do what we needed it to do.

The script starts by setting a few constants and including the Search-ADUserWithExpiringPasswords module.

$smtpserver = "mail.myco.com" 
  $emailFrom = "HelpDesk@myco.com" 
  $HelpDeskTo = "HelpDesk@myco.com"
  $DaysToNotify = "7"
  $SendEmpEmail = $True
. C:\NetAdmin\Notify-PasswordExpiration\Search-ADUSerWithExpiringPwd.ps1
Function CPwdLastSet ($pls) { [DateTime]::FromFileTime($pls) }   # assumed body: convert pwdLastSet (a FILETIME value) to a DateTime

Next, we reset some counters and do the actual search

$logtxt = ""
$count = 0
Search-ADUserWithExpiringPasswords -searchbase "ou=home_employee,ou=user accounts, `
ou=ourOffice,dc=myco,dc=com" -TimeSpan $DaysToNotify `
-Properties mail,PwdLastSet,givenName,sn|`

This command from Steve's module searches the OU specified for passwords expiring in $DaysToNotify (in this case 7) and returns the necessary attributes. Notice that the search command is not terminated but is the beginning of a pipeline to the remainder of the script. The next part of that pipeline processes each returned user object and sends email.

ForEach-Object {
  $today = Get-Date 
  $logdate = Get-Date -format yyyyMMdd 
  $samaccountname = $_.samAccountName 
  $FName = $_.givenName
  $Lname = $_.sn
  $count += 1
  $emailTo = $_.mail  
  $passwordLast = cPwdLastSet($_.pwdLastSet)   #this is a date now
  $maxAge = (New-Object System.TimeSpan((Get-ADObject (Get-ADRootDSE).defaultNamingContext -Properties maxPwdAge).maxPwdAge))
  $passwordexpirydate = $passwordLast.Subtract($maxAge)   #maxPwdAge is stored as a negative value, so subtracting it adds
  $daystoexpiry = ($passwordexpirydate - $today).Days
  $expirationDate = $passwordexpirydate.ToString("D")

This part of the foreach loop calculates the number of days to password expiration, and the expiration date so we can use them in the email message.
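The same expiry arithmetic can be sketched in shell with GNU date, if that helps make the calculation concrete. All dates and the 90-day maximum age below are invented sample values, not values from the script (which pulls maxPwdAge from AD; note AD stores it as a negative interval, which is why the PowerShell subtracts it):

```shell
# Sketch of the expiry arithmetic with GNU date (all values hypothetical):
# expiry = pwdLastSet + max password age; days-to-expiry = expiry - today
pwd_last_set="2012-05-01"     # hypothetical pwdLastSet, as a date
max_age_days=90               # hypothetical domain maximum password age
today="2012-05-25"            # hypothetical "today"
expiry=$(date -u -d "$pwd_last_set + $max_age_days days" +%Y-%m-%d)
days_to_expiry=$(( ( $(date -u -d "$expiry" +%s) - $(date -u -d "$today" +%s) ) / 86400 ))
echo "expires $expiry in $days_to_expiry days"
```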

$subject = "Your network password will expire soon."     
  $body = "$FName $LName, `n`n" 
  $body += " Your password will expire in $daysToExpiry day(s) on $ExpirationDate.  Please change your password before it expires to ensure you can continue to work. `n`n" 
  $body += "For instruction on how to change your password please refer to this document on the Employee Zone: http://OurSharepoint/SiteDirectory/ee_Info/Shared%20Documents/NetAdmin/HomeworkerPwChange.doc"
  $body += " `n`nIf you are unable to change your password, please contact us at 215 734-2253 `n`n" 
  $body += "Thank you! `n`nYour SysOps Team"
   #Employee notification
   if ($SendEmpEmail) {
	#Send-MailMessage -To $emailTo -From $emailFrom -cc "gmartin@myco.com" -Subject $subject -Body $body  -SmtpServer $smtpserver 
	Send-MailMessage -To $emailTo -From $emailFrom -Subject $subject -Body $body  -SmtpServer $smtpserver 

	$logtxt += $today.ToString("d")
	$logtxt += " Email was sent to $samAccountName for password expiring on $passwordexpirydate`n"
   }
}

This section of the foreach generates the text for the mail message and sends it using Send-MailMessage.

Finally, we generate a summary message for the helpdesk.

$logtxt += @"
   $count employee(s) notified.
   This message is sent from a scheduled task called Notify-PasswordExpiration running on ADMON.  The task queries Active Directory 
   for homeworker accounts whose passwords are expiring in the next 7 days and emails the employee.  It also notifies Help Desk with a summary. 
   No action is generally necessary except by the notified employees.  Please see the Windows Engineering team if you need assistance.

Task home:

Last update:
May 2012
"@
	#system notification
  #Send-MailMessage -To $emailFrom -From $emailFrom -Subject "Password Expiration notices" -Body $logtxt  -SmtpServer $smtpserver 
  Send-MailMessage -To $HelpDeskTo -From $emailFrom -cc "gmartin@myco.com" -Subject "Password Expiration notices" -Body $logtxt  -SmtpServer $smtpserver

A couple of other notes:

  • We use a job server for running these maintenance tasks. I like to include the UNC to the job so someone can fix it in my absence
  • All of these scripts are signed with a domain-based CA code-signing cert

Leave a comment if you have questions

Linux Backups

Wednesday 25 of April, 2012

Found a backup script and modified it - see below. The script is saved in /etc/cron.weekly and should run at 4:30 every Sunday (day 0). The tape device is /dev/nst0.

#!/bin/bash
# Create backups of /etc, /home, /usr/local, and...

PATH=/bin:/usr/bin
tape="/dev/nst0"
backupdirs="/etc /root /boot /home /usr/local /var/lib /var/log /var/www/htdocs"

# Make MySQL dumps
echo "Dumping tiki database"
mysqldump --password=M@ddexit --flush-logs --opt tiki > /usr/local/MySqlDumps/tikidb
echo "Dumping mysql admin database"
mysqldump --password=M@ddexit --flush-logs --opt mysql > /usr/local/MySqlDumps/mysqldb

echo "Rewinding tape"
mt -f $tape rewind
for path in $backupdirs
do
        echo "System backup on $path"
        tar cf $tape $path 1>/dev/null
        sleep 2
done
echo "System backups complete, status: $?"

echo "Now verifying system backups"
mt -f $tape rewind
for path in $backupdirs
do
        echo "Verifying $path...."
        tar tf $tape 1>/dev/null
        if [ $? -eq 0 ]
        then
                echo "$path: verified"
        else
                echo "$path: error(s) in verify" 1>&2
        fi
        mt -f $tape fsf 1
done
mt -f $tape rewoffl
echo "Please remove backup tape" | wall
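Since each `tar cf` writes one tape file, restoring a single directory later means skipping past the archives written before it. A sketch of that file-number arithmetic, using the same directory order as the script (the target directory here is a hypothetical example):

```shell
# Each directory is one archive on tape, in the order of $backupdirs.
# To restore one of them, rewind, skip N files with mt fsf, then extract.
backupdirs="/etc /root /boot /home /usr/local /var/lib /var/log /var/www/htdocs"
target="/home"                 # hypothetical directory to restore
n=0
for path in $backupdirs; do
    [ "$path" = "$target" ] && break
    n=$((n+1))
done
echo "mt -f /dev/nst0 rewind; mt -f /dev/nst0 fsf $n; tar xf /dev/nst0 $target"
```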

Amahi File Sharing Issues

Friday 06 of April, 2012

I run Amahi and use the samba file sharing feature to feed data to my WDTV Live. It works great - until this week. We couldn't get to any of the shares. I found a number of issues.

greyhole error
Can't find a vfs module [greyhole]

This was caused by an install problem with greyhole. The link to the greyhole library was missing from /usr/lib/samba/vfs. The fix was reinstalling greyhole. I run an unofficial greyhole build and used this to fix it:

rpm -Uvh --force http://www.greyhole.net/releases/rpm/i386/hda-greyhole-0.9.9-1.`uname -i`.rpm
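Before forcing the reinstall, it's worth confirming the symptom: the library samba complains about should exist under the vfs directory. A sketch (a temp directory stands in for /usr/lib/samba/vfs here, so it will report missing):

```shell
# Sketch: check for the greyhole VFS library before forcing a reinstall.
# The real path (per this post) is /usr/lib/samba/vfs; mktemp stands in for it.
vfsdir=$(mktemp -d)
status="missing"
[ -e "$vfsdir/greyhole.so" ] && status="present"
echo "greyhole VFS module: $status"
# if missing, reinstall greyhole as shown above
```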

Next I saw two issues with samba in /var/log/messages. They may have been related:

smbd_open_once_socket: open_socket_in: Address already in use

current master browser = UNKNOWN

The fix for this was to make IPv6 sockets bind to IPv6 only (so they stop colliding with the IPv4 listeners) by running:

sysctl net.ipv6.bindv6only=1

I also added this to rc.local so it takes effect at boot time.
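Since rc.local only needs that line once, a guarded append keeps repeated edits or reruns from stacking duplicates. A sketch (a temp file stands in for /etc/rc.local):

```shell
# Append the sysctl line to rc.local only if it isn't already there
rcfile=$(mktemp)               # stand-in for /etc/rc.local
line="sysctl net.ipv6.bindv6only=1"
grep -qxF "$line" "$rcfile" || echo "$line" >> "$rcfile"
grep -qxF "$line" "$rcfile" || echo "$line" >> "$rcfile"   # re-running adds nothing
count=$(grep -cxF "$line" "$rcfile")
echo "line appears $count time(s)"
```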

Specify a PowerShell Module Manifest

Friday 06 of April, 2012

I loaded the Authenticode (cache) module from PoshCode today. Thanks to Joel Bennett for the module. To avoid typing the cert info every time, I had to specify a module manifest with PrivateData. Here's the command that worked:

New-ModuleManifest snippet
New-ModuleManifest H:\Documents\WindowsPowerShell\Modules\Authenticode\Authenticode.psd1 -Nested H:\Documents\WindowsPowerShell\Modules\Authenticode\Authenticode.psm1  -ModuleToProcess "Authenticode" -Author "gmartin" -Company "MyCo" `
-Copy "2012" -Desc "script signing tools" -Types @() -Formats @() -RequiredMod @() -RequiredAs @() `
-fileList @() -PrivateData AE713D19867XXXXXXXXXX622F4B69DB5F4EE01B2

Webmin add-on

Saturday 11 of February, 2012

I downloaded Webmin as it seems to have become something of an MMC for Linux. The install went as planned. Webmin is excellent at guessing what needs to be done, asking if it should, and then just working. It is fully extensible, and there are hundreds of modules for managing every possible aspect of the system.

I was on a hunt to find a mailman module (mailman is a mailing list manager). All references to the existing mailman module say the new version is in development, but I cannot find any real information on it.

During my discovery, I found du.wbm, which purports to display disk usage information in pretty pie charts. I'm not afraid to get dirty, but graphics are a great way to monitor things. I downloaded it and used the Webmin interface to install. It found that I did not have the Perl GD module and asked to install it. I said yes and hit a problem with the make: I received an error "gd.h: file or directory not found". After asking around on LinuxQuestions, DaHammer suggested I run 'locate libgd' to verify that I have libgd installed - duh! (To my credit, I had done a find for libgd* and received results; I think there are other closely named modules.) At this point I need to download and install libgd, and I'm awaiting a link for the download.

\\Greg

Integrating Tiki forums and Mailman

Wednesday 08 of February, 2012
Adding a mailing list to the forums

I want the GaughanPA posts to wind up in a forum.
To do so, I will add an alias for the forum (done).
The alias will call a script which processes the message and posts it to the forum.
Here's the aliases entry:
ForumTest: "|/etc/mail/mailecho post ForumTest"

Here's the /etc/mail/mailecho script for reading from stdin:

while read msg
do
        echo $msg >> /tmp/mailecho.log
done
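A quick way to sanity-check the idea is to pipe a canned message through the same read loop (a temp file stands in for /tmp/mailecho.log, and the message body is invented):

```shell
# Pipe a fake 3-line message through the mailecho-style loop and count what got logged
log=$(mktemp)                  # stand-in for /tmp/mailecho.log
printf 'Subject: test\n\nhello forum\n' | while read -r msg; do
    echo "$msg" >> "$log"
done
lines=$(wc -l < "$log")
echo "logged $lines lines"
```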

Todo next:
Learn enough PHP to put the contents of the message into a database entry for the forum lol

That's enough for tonight

Using Nagios to monitor for WinSCP & SSH Host key check

Friday 27 of January, 2012

I love Nagios because the toolset is so simple and powerful that we can monitor almost anything that has an IP address. We have an automated process that delivers data once a day via secure-FTP using WinSCP. The data is delivered via private link to our parent company, and once or twice per year the SSH host key changes. We never find out until we detect the job is failing. Today when it happened, we set out to find a way to test for this condition so we might know ahead of time what's happening.

The solution uses the following:

  • WinSCP console mode
  • Nagios' NSClient++ and an NRPE check
  • Windows shell file

The first issue is getting WinSCP to report back in an automated way that the host key is changing. I did that using this WinSCP command script

option batch on
open auser@theirhost.company.com

and this WinSCP command line:

winscp.exe /console /script=cmdocc.txt /log=tt.log

The option batch on command tells WinSCP to immediately cancel any input prompt. The open command tells WinSCP to open a connection using the stored session specified.

When the open is executed successfully and nothing has changed, the script checks the current directory and exits. If the host key has changed, WinSCP prompts to accept it, but option batch on replies no, the connection fails, and the script exits, but not before logging the condition to the specified log file.

The final piece is this windows command shell.

@echo off
:: TestHostKey - tests the stored ssh host key and reports if it has changed
:: Uses a winscp batch script to connect to the appropriate host.  If the host key is different, it will log to a file and exit
:: Script tests for 'key not verified' in the output log
set WINSCPEXE=\netadmin\winscp\winscp.exe
set WORK=\netadmin\nrpe

cd %WORK%
del tt.log /q

%WINSCPEXE% /console /script=hostchk.txt /log=tt.log
findstr /i /c:"Host key wasn't verified!" tt.log >nul

if %errorlevel% NEQ 1 ( 
	echo Host Key does not match
	exit 1
) ELSE (
	echo Host key OK
	exit 0
)
The script orchestrates the call of WinSCP and after it exits uses findstr to look for "Host key wasn't verified!" in the log file. Based on the results, it sets the exit code and sends an output string to stdout.
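The same log test could be sketched on a Linux box with grep in place of findstr (the log file name and match string are from this post; the failing log line is faked here rather than produced by a real WinSCP run):

```shell
# Simulate a failed handshake log, then apply the same string test the batch file does
printf '%s\n' "Host key wasn't verified!" > tt.log
if grep -qi "Host key wasn't verified!" tt.log; then
    result="Host Key does not match"
else
    result="Host key OK"
fi
echo "$result"
rm -f tt.log
```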

This is where Nagios comes in. Nagios uses two pieces of information to monitor a host - the exit code of the check command and the output string. The exit value allows Nagios to decide if the service is healthy and the output string is usually some clear text for the human.

We use NSClient++. To make this work I had to make the following changes to NSC.ini:

  • enable NRPEListener.dll by uncommenting the entry in the modules section
  • set the NRPE port by uncommenting the port line in the NRPE section
  • set use_ssl=1 in the same section
  • Add the following entry to the NRPE Handlers section

Note: make sure all the other samples are commented out in that section unless you are using them.

On the Nagios server, use the check_nrpe command to issue the check_hostkey against this server on a scheduled basis and tell someone when there is a problem.

This is a bit of a house of cards that took me about 90 minutes to piece together, but the point in all this is to show that with a bit of ingenuity you can put together a solution to test anything with Nagios.


Setting DHCP hostname

Monday 23 of January, 2012

Edit /etc/rc.d/rc.inet1, find and uncomment #DHCP_HOSTNAME="zzzzzzzz", and change it to DHCP_HOSTNAME="hostname". Under Slackware, run netconfig to walk through an automated script.

\\Greg
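The uncomment-and-edit step could also be scripted with sed. A sketch, run against a sample line rather than the real file (the in-place form would be `sed -i '...' /etc/rc.d/rc.inet1`, and "myhost" is a placeholder hostname):

```shell
# Turn the commented-out DHCP_HOSTNAME line into an active one
sample='#DHCP_HOSTNAME="zzzzzzzz"'
result=$(echo "$sample" | sed 's/^#DHCP_HOSTNAME=.*/DHCP_HOSTNAME="myhost"/')
echo "$result"
```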

Open Upload Invitation MIME fixes

Thursday 12 of January, 2012

I've been working to implement a web-based file exchange system for work as an alternative to sending files through email. Early on we identified Open Upload as a good option for this. While the latest release 0.4.2 is stable, it is over a year old and there have been significant changes implemented in svn. It is a much better option.

While trying to build it for deployment, I discovered a flaw in the email notifications for invitations. The problem is the MIME message format is currently incorrect. After some exchange of email with the project lead I added two files invitationNotifytext.tpl and invitationNotifyHtml.tpl that contain the information needed to generate clean email messages.

In addition, I modified the uploadNotifyHtml.tpl and uploadNotifytext.tpl files to fix some minor typos. I'm attaching the files here in case you need them. Once they are committed to svn, I'll remove them.


Sorting Music Files with Powershell

Wednesday 11 of January, 2012

I'd collected a bunch of music files over time, all in a single directory. I want to sort them into a folder structure arranged at the top level by artist and then by album.

I found a great article by Tobias Weltner here (cache) which is the basis for my script.

The details for music files can be accessed through a Windows Shell object and we can set that up like this:

$path = 'C:\Users\Public\Music\Sample Music\Maid with the Flaxen Hair.mp3'
$shell = New-Object -COMObject Shell.Application
$folder = Split-Path $path
$file = Split-Path $path -Leaf
$shellfolder = $shell.Namespace($folder)
$shellfile = $shellfolder.ParseName($file)

and once we have that we can list the possible attributes with this code

0..287 | Foreach-Object { '{0} = {1}' -f $_, $shellfolder.GetDetailsOf($null, $_) }

Now that you see how to read these Extended attributes, read Tobias' code for the Add-FileDetails function. It enumerates the requested attributes for a given file. I'll not repeat that code here.

What I needed was some code that would

  • enumerate all the files in a directory,
  • find the Artist and album name for each file
  • construct a destination path for each file
  • create any needed folders
  • move the files

Warning - This code is NOT pretty

$sourcepath = ""
   #enumerate files & gather attributes
dir "$sourcepath*.*" | Add-FileDetails | foreach {
    $mfile = $_
       #build artist-level path
    $mpath = "" + $mfile.Ext_Artists
    $mpath = "\Music\" + $mpath

      #build album-level path and combine them
    $mAlbum = $mfile.Ext_Album
    $mapath = $mpath + "\" + $mAlbum
    write-host $mapath $mfile.name

    if (! (Test-Path $mpath)) {
        echo "Creating artist folder: $mpath"
        mkdir $mpath
    }
    if (!(Test-Path $mapath)) {
        echo "Creating album path $mapath"
        mkdir $mapath
    }

       #assuming the path was created or already existed, move files
    if (Test-Path $mapath) {
        Move-Item -LiteralPath $mfile $mapath
    } else {
        echo "Unable to create path: $mapath"
    }
}

(Note: I made one mod to Tobias' code that you may notice. Instead of the attributes prefixed with Extended_, I shortened it to Ext_)

One thing I had to deal with was special characters in some of the artist and album names. I did that with a series of -replace commands that I left out of the code above for readability. Here's a snippet; you may need to adapt it for your data.

#replace \, /, :, ;, and ' in names with an appropriate alternative
    $mpath = "" + $mfile.Ext_Artists
    $mpath = $mpath -replace "\\","-"
    $mpath = $mpath -replace "/","-"
    $mpath = $mpath -replace ":","-"
    $mpath = $mpath -replace ";","-"
    $mpath = $mpath -replace "'",""
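For what it's worth, the same character replacements can be sketched in shell with sed, using the replacement choices from the snippet above (the artist name is invented to contain every troublesome character):

```shell
# Replace \, /, :, ; with - and strip apostrophes, mirroring the PowerShell snippet
artist="AC/DC: Live; 'Best'"   # hypothetical worst-case name
clean=$(echo "$artist" | sed -e 's#\\#-#g' -e 's#/#-#g' -e 's#:#-#g' -e 's#;#-#g' -e "s#'##g")
echo "$clean"
```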

Passive Nagios Checks

Wednesday 07 of December, 2011

Had to learn how to submit passive Nagios checks. Here are the steps

  • Define a service or modify a service template to set the directives
passive_checks_enabled 1
active_checks_enabled	0
  • Install and configure nsca service on nagios
  • Install and configure send_nsca utility on the server that needs to submit the check.

  • Write your service check to output the following text to a file called outfile:
Hostname;Service Description;return code;text output
  • execute send_nsca to send the output to the nagios server with this command-line
cat outfile | send_nsca -H nagios_svr_addr -d \; -p 5667

This command sends the file output received from stdin to send_nsca.
Note: the default delimiter for the output file is a tab. I changed it to the semi-colon for simplicity here and set that option by using -d on the command line. As ; is special to bash, I escaped it with \.
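Putting the pieces together, composing one passive result line in the Hostname;Service Description;return code;text output format might look like this (host, service, and check output are all invented sample values):

```shell
# Build one passive check result in the semicolon-delimited format described above
host="web01"; svc="Disk Usage"; rc=0; msg="DISK OK - 42% used"   # sample values
line="${host};${svc};${rc};${msg}"
echo "$line"
# to submit:  echo "$line" | send_nsca -H nagios_svr_addr -d \; -p 5667
```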

Nagios check_by_ssh configuration

Tuesday 29 of November, 2011

I just spent a couple hours with nagios setting up remote Linux monitoring using check_by_ssh. There are some pitfalls that I discovered that may save you some trouble.

- The local user account that nagios runs under and that the checks will be initiated by (for my install it's nagios) must have a home directory and shell defined. Failure to do so may result in the error: Remote command execution failed: Could not create directory '/.ssh'.

- To complete configuration of the nagios account, login as nagios (I used su - nagios); use ssh to login to the remote host you wish to monitor and successfully cache the remote host fingerprint.

- The account on the remote box used for monitoring must also have a valid shell and home directory defined. Failure to do so may result in a No protocol specified error.
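A sketch of the account check both points boil down to: pull the home and shell fields from a passwd entry and flag obviously bad values (the entry below is invented; on a real box you would feed it from `getent passwd nagios`):

```shell
# Inspect the home (field 6) and shell (field 7) of a passwd entry
entry="nagios:x:1001:1001:Nagios:/home/nagios:/bin/bash"   # sample entry
home=$(echo "$entry" | cut -d: -f6)
shell=$(echo "$entry" | cut -d: -f7)
if [ -n "$home" ] && [ -n "$shell" ] && [ "$shell" != "/sbin/nologin" ]; then
    check="ok"
else
    check="needs fixing"
fi
echo "home=$home shell=$shell -> $check"
```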

Copying Active Directory OU Structure, Groups and Accts to a Test Domain (CopyAD)

Saturday 25 of June, 2011

I need to copy some of our AD contents into a test domain. This has come up before so I wrote a collection of PowerShell scripts to handle the process.

For our current needs, we need OUs, users, groups and group memberships copied over. I worked this out over a couple days and developed a series of scripts that exports, renames and imports the objects into AD.

The scripts come in 8 parts - 4 export and 4 import. The import scripts must be executed in a particular order so that the necessary parts are available when needed. That order is OUs, Users, groups, group memberships.

The export scripts are interesting because they include a large number of AD attributes for the users yet filter out things like the SIDs & passwords, so they are safe to use from a security perspective. Note too that the user accounts are disabled upon creation. This is easily remedied but left to the scripter.

The import scripts are a bit more complicated as they replace certain attributes with corresponding values from the new domain. Specifically, UPN, DN and mail are fixed up. Also, there's a neat trick played with split to drop off the cn=username portion of the DN so that the OU path for the new object is correct.

One last point: I chose not to deal with the Exchange install in my test domain, so some of the Exchange-related groups error out during creation.

Find the scripts as the copyAD Suite in the TechNet Script Repository

Expanding a disk in VirtualBox

Sunday 29 of May, 2011

This isn't a problem I should have had to deal with, but I can be stupid. Like other virtualization products, VirtualBox offers a dynamic disk option that allows you to overestimate how large a disk you'll need without penalizing you with wasted disk space. VBox will grow the disk image as you need it rather than pre-allocate the entire disk.

I built a Win7 virtual some time ago and set the C:\ to a max of 20GB. In hindsight, I should have made it 40GB. Today I dealt with it using Clonezilla and Windows 7 disk manager. Here's how.
(Note: After doing all this, I learned about VirtualBox's built-in feature to resize a disk using 'VBoxManage modifyhd'. You may want to look into that first.)


My virtual had two IDE drives (master & slave). The master was the system (c:) disk and the slave a data disk. My plan was to create a new virtual disk; clone the system disk to it and discard the original disk.

Create a new disk

I opened the settings for the virtual and created a new disk. I attached it to a SATA controller since the IDE controller was full. I made the disk dynamic with a max size of 40GB.

Clone the disk

With the new disk attached, I then booted the virtual with the Clonezilla disk mounted to the virtual from the host optical drive. Clonezilla booted and I followed the instructions to clone from disk to disk. The source disk in my case was sda and the destination sdc. Clonezilla made quick work of the clone and I powered off the system once it had completed.

Reset the drive attachments

Since I already had a master IDE disk attached to the virtual, I couldn't boot to the new disk. I opened the drive settings again and did a couple things.
-- Dropped the original system disk
-- Detached the data disk and reattached it to SATA along with the new system disk.

Boot and resize the new disk

I then booted the VM on the new system disk, which it handled without issue. The system partition was still 20GB; this was expected behavior since Clonezilla cloned the partition table and the partition data. Windows Disk Manager showed the 20GB system partition and 20GB of unallocated space on the disk. I selected the system partition, right-clicked, and selected Extend Volume. The wizard that came up stepped me through extending the volume into the remaining unallocated space.


While Windows didn't ask for it, I restarted the virtual to ensure everything was healthy.

Sorting Computers by OS with PowerShell

Wednesday 11 of May, 2011

We needed to move a hundred or so computers into different OUs based on their operating system today. We weren't sure how to approach this at first, but a quick search revealed that AD tracks a computer account's operating system in an attribute called, oddly enough, operatingSystem. With that in mind, we developed the following PowerShell command line to find and move Windows XP accounts.

Sorting Computers with Powershell
get-adcomputer -SearchScope onelevel -searchbase "ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" -filter 'operatingsystem -like "Windows XP*" ' | move-adobject -targetpath "ou=FDCC,ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" -passthru

To talk this through a bit, the first part of this is the query to locate computer accounts:
get-adcomputer -SearchScope onelevel 
  -searchbase "ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" 
   -filter 'operatingsystem -like "Windows XP*" '

This command uses the Get-ADComputer cmdlet to:

  • search the OU specified by the -SearchBase parameter
  • restrict the search to that OU only (not sub-OUs) with the -SearchScope parameter
  • match computers whose operating system starts with "Windows XP" via -Filter 'operatingSystem -like "Windows XP*"'
  • send the results down the pipeline

The second half of this command
move-adobject -targetpath "ou=FDCC,ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" -passthru

uses the Move-ADObject cmdlet to move the AD objects passed through the pipeline to the AD container specified by -Targetpath.

It took us a total of 10 minutes to work through the help command to define the search and move action, test using the -whatif switch and implement. We then repeated the whole thing to search for Windows 7 PCs and move them into a separate container.

Running PS3 Media Server as a non-root service on Fedora and Amahi

Tuesday 29 of March, 2011

Sorry for the detailed title, but this problem has been solved several times so I want to justify why I am solving it again.
I use PS3 Media Server (PMS) on Amahi to stream content to my Sony TV and PS3. I am helping to package the app for Amahi.

In doing so, I was having difficulty running PMS as a non-root user on Amahi and discovered the following:

  • Fedora services use the runuser command to launch a new shell as the specified user and run the specified command in that shell (runuser commandline is in the /etc/init.d/functions script)

  • PMS locates the PMS.conf file within the current directory when it is executed. There is no current way to specify the location of the conf file.

  • When runuser executes a command, it drops the user into their home directory first and executes from there. For the apache user (which Amahi uses for these services), there is no home directory, making it even more confusing.

With all this background, the solution was straightforward. We need to change the command we pass to runuser to cd to PMS_HOME and then execute the java command. Here's the change I made to the pmsd service script:

daemon -20 --user $PMSUSER "cd $PMS_HOME && $JAVA $JAVA_OPTS"
The key being the addition of cd $PMS_HOME && prior to executing the java command.
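To see why the cd matters, here is a tiny sketch: PMS.conf is only visible when the current directory is the PMS home, which is exactly what the cd buys us inside the runuser shell (a temp directory stands in for the real install directory):

```shell
# Demonstrate that a relative PMS.conf lookup only works from inside $PMS_HOME
PMS_HOME=$(mktemp -d)          # stand-in for the real PMS install directory
touch "$PMS_HOME/PMS.conf"
found=$( cd "$PMS_HOME" && [ -f PMS.conf ] && echo "yes" || echo "no" )
echo "PMS.conf visible from \$PMS_HOME: $found"
```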

The entire service script is below:
Note: I did not write this script. (Thanks to the Amahi & Fedora communities for it.)

#!/bin/bash
#       /etc/rc.d/init.d/pmsd
# Starts the PS3 Media Server
# chkconfig: 345 70 80
# description: PS3 Media Server
# processname: java
#
# Provides: pmsd
# Required-Start: $syslog $local_fs
# Required-Stop: $syslog $local_fs
# Default-Start:  3 4 5
# Default-Stop: 0 1 6
# Short-Description: start and stop pmsd
# Description: PS3 Media Server

JAVA=`which java`

JAVA_OPTS="-Xmx768M -Xss16M -Djava.encoding=UTF-8 -Djava.net.preferIPv4Stack=true -classpath $PMS_JARS net.pms.PMS -Djava.awt.headless=true $@ >>/var/log/pmsd.log 2>>/var/log/pmsd.log &"

export PMS_HOME

# Source function library.
. /etc/rc.d/init.d/functions

start() {
        # Check if pms is already running
        if [ ! -f /var/lock/subsys/pmsd ]; then
            echo -n $"Starting PMS daemon: "
            daemon -20 --user $PMSUSER "cd $PMS_HOME && $JAVA $JAVA_OPTS"
            RETVAL=$?
            [ $RETVAL -eq 0 ] && touch /var/lock/subsys/pmsd
        fi
        return $RETVAL
}

stop() {
        echo -n $"Stopping PMS daemon: "
        killproc $JAVA
        RETVAL=$?
        [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/pmsd
        return $RETVAL
}

restart() {
        stop
        start
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  status)
        status $JAVA
        ;;
  restart)
        restart
        ;;
  *)
        echo $"Usage: $0 {start|stop|status|restart}"
        exit 1
        ;;
esac

exit $RETVAL

Restoring a DFS root

Sunday 13 of March, 2011

We make heavy use of Microsoft's file share virtualization technology - Distributed File System (DFS). Today, one of our root DFS shares got deleted and we had to scramble to get it back. Here's what we tried and what worked.

Since the object seemed to reside in AD, under the CN=System container in our domain, the first thing we tried was an Active Directory undelete (a recycle bin was added to AD in Win2k8). We tried several tools and methods to restore the object.

Each of these failed with a similar error. It appears that the AD object had some key attributes removed when it was deleted, so the object in the deleted items container was not a valid AD object (and hence would not restore). My guess is that Microsoft has not designed all AD object deletions with restoration in mind.

So here's what we did that worked

  1. We restored one of our virtualized DCs to a new VM with no network connection
  2. Since the DFS root was not on this DC, we created an identical DFS root on that DC
  3. AD magically repopulated the DFS shares that were configured below the deleted root. We suspect this happened because the DC's AD still thought the root existed
  4. Exported the configuration using dfsutil
  5. Shut down the VM and opened the VHD so we could copy out the files dfsutil created
  6. Edited the DFSUtil output to remove the entry for the new DC
  7. Imported the dfs config using dfsutil with the /Set switch
  8. Tested

Lessons learned:

  • We are considering a scheduled task to export the DFS config using dfsutil
  • We set the "Protect object from accidental deletion" on each of the DFS objects in AD

Note: I doubt this is a Microsoft approved solution, so, YMMV.

If you have thoughts on this, leave a comment here or on Twitter (I'm @uSlacker)

Watching the This Week in Tech (TWiT) network on PS3 Media Server

Thursday 10 of March, 2011

I've been working with the PS3 Media Server (cache) to stream video to my Sony Bravia TV (which is picky about formats). PMS also supports web feeds and streams. Since some of my favorite podcasts are from the TWiT network (cache), I did some work to make the shows available through PMS.
The WEB.conf file is used to set this up. I started by grabbing the YouTube playlists for each show, but the streams from YouTube are slow, stutter, and generally suck. Instead, I grabbed the RSS feeds from twit.tv and added the following to my WEB.conf:

videofeed.Web: Video,TWiT=http://feeds.twit.tv/twig_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/twit_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/ww_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/floss_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/ipad_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/mbw_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/tnt_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/sn_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/dgw_video_large

Next I need to figure out how to stream the live feed. Any idea how to find the H.264 feed?

Update: I found the feed address to stream TWiT Live (cache). Add this line to Web.conf:

videostream.Web: Video,TWiT=TwiTLive,http://bglive-a.bitgravity.com/twit/live/high

Streaming Video to Sony Bravia TV using Amahi

Thursday 10 of March, 2011

(Note: after this writing, I found that Amahi has a beta for PS3 Media Server. It is still untested but might be a better route to take).

I have a Sony Bravia KDL-46Z5100 and a Sony PS3. Both are capable of streaming via DLNA, but they are very picky about the container and video formats they will consume. I want to be able to stream video from my Amahi HDA (cache) server. There are several DLNA apps available for the HDA, but I needed something that transcoded on-the-fly in order to get what I needed.

I had been playing with the PS3 Media Server (cache). It is a Java-based open source tool that was made to transcode. The project has matured significantly in the past year as the community has stepped in to push the project along after the founder got busy with life. I'd been using PMS on my Slackware server for the better part of '10, but that server was on a wireless connection and results were flaky.
With my Amahi server hardwired in to the network, it became a natural place for the PMS.

I installed PMS by un-tarring the files into a directory (/var/pms on my system). I hacked the service script from another java-based service so I could have PMS start at boot. That script can be downloaded here. It needs some work, as it currently gives an error during shutdown (but the app stops correctly). Save it as ps3_media_server in the /etc/init.d directory on your HDA, then run these two commands:

chkconfig ps3_media_server on

service ps3_media_server start

My next step will be to get this wrapped into a package for Amahi. Let me know if you can make use of this.


Active Directory is the killer app for Powershell

Friday 25 of February, 2011
We did something not so smart in AD a few months back. To fix it, we needed to reset a bunch of passwords and clear the passwordneverexpires flag on some 250 accounts. PoSh to the rescue!
Note: this is a Win2k3 domain that we don't own, so I have to use the Quest AD cmdlets.

To find the accounts, we did this:

get-qaduser /path/to/OU -passwordneverexpires | select Name,DN,Samaccountname,passwordneverexpires | export-csv "C:\temp\file.csv"

Next we took the exported csv file, added a column named password, and generated a bunch of strong passwords. What was left was a one-liner to make the changes (note: I removed the object type definition from the first line of the csv):

import-csv "c:\temp\file.csv" | foreach { get-qaduser $_.samaccountname | set-qaduser -userpassword $_.password -passwordneverexpires $false }

The ease with which import-csv allows you to read in and address the fields of a csv/spreadsheet is incredible. The way the Quest cmdlets and the MS AD cmdlets allow you to act on multiple accounts at once is powerful.
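We generated the password column by hand in the spreadsheet, but that step scripts easily too. Here's a hedged sketch in Python (the password length and character set are my choices for illustration, not part of the original workflow):

```python
import secrets
import string

# Character set and length are illustrative choices, not from the
# original workflow.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def strong_password(length=14):
    """Build a random password from a cryptographically secure source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def add_password_column(rows):
    """Add a 'password' field to each exported account row."""
    for row in rows:
        row["password"] = strong_password()
    return rows

# One hypothetical row from the exported csv:
rows = add_password_column([{"Samaccountname": "jdoe"}])
```

Written back out with csv.DictWriter, the result is the same file-with-password-column the one-liner above consumes.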

Amahi Drive Replacement

Sunday 13 of February, 2011
My Amahi HDA (cache) server was reporting two SMART drive warnings. The boot drive and one of the storage pool drives had more bad sectors than SMART preferred, and I had been looking for replacement drives for several weeks. I found a pair of 1TB Seagate drives at NewEgg for $55 each; their free shipping cinched the deal.

My plan was this:

- Replace the bad (sata) disk in the pool
- Move the data to one of the new disks
- Remove the second (sata) drive from the pool
- Use this good sata drive to replace the failing ATA boot drive
- Add the second new sata drive to the pool

The storage pool in Amahi is handled by greyhole (cache), a truly ingenious technology similar in idea to Microsoft's now defunct Drive Extender. But using greyhole makes swapping out a pool drive a non-trivial operation.

The first step in changing a drive is to get the data off of it. The tricky part was figuring out which physical drive had the error and then which drive that represented in the pool. Fortunately, I had two different drive manufacturers in the pool, so looking at the /dev/disk/by-id info allowed me to determine it was the WD drive. From there I determined the WD drive was mounted on /var/hda/files/drives/drive4.

The command for telling greyhole to move the data off this drive is:

greyhole --going=/var/hda/files/drives/drive4/gh

Greyhole will tell you to wait while it goes off to move the data. I had 150GB of files on the disk; it probably ran an hour or so (I didn't time it). Once finished, I could verify with the greyhole -s command that there was no data on the disk.

The next step is to remove the disk from fstab. I always choose to comment out the line by adding a leading #; that way I can put it back without issue. Of course, I failed to make the fstab change prior to shutting off and removing the disk. Fedora would not boot afterwards, telling me there was a disk error and dropping me to a shell to fix it. Oddly, I could not edit fstab from that shell, so I put the disk back in, booted, edited fstab and shut down again. At this point, all the storage pool data was on a single drive.

Next was getting the replacement drive in place. I followed the instructions for adding a second hard drive to your HDA (cache). It comes down to partitioning the disk, adding a file system (I use ext4) and mounting it in the right place. The hda-diskmount command will find the drive once formatted and suggest a location and fstab entry. This is handy and it worked as advertised.

One thing of note: Amahi mounts drives under /var/hda/files/drives/driveX. It doesn't seem to detect whether old mount points are still in use, so it always creates a new mount point. This is safe, but messy. When I added the new 1TB disk, it was assigned to 'drive5' even though there were three unused directories already.

I added the fstab entry and rebooted, then added the new disk to the storage pool and everything was good. I got to thinking that all the data was still on the first disk and wondered if there was a way to balance the data across drives. A review of the greyhole help and a question in the #amahi IRC channel pointed me to the balance option. I ran

greyhole --balance

and monitored the greyhole.log file to see greyhole making a bunch of data moves. It ran for a while.

Once that was complete, I had to tackle replacing the drive that contained the boot partition. For this, Clonezilla was the obvious choice, though I had no experience with the tool. Since the disks were different sizes (the ata disk was 160GB and the sata 250GB), I started by cloning only the Fedora boot and root partitions. This left me with an unbootable sata disk. I then tried a full disk-to-disk clone, and this worked: I wound up with a 200MB boot partition, a 160GB root and 83GB of unallocated space. The disk worked as expected.

I wasn't satisfied with the unallocated space, so I turned to the gParted Live CD to resize the root partition to add the additional space. Again, this was flawless.

At this point I had only one remaining task: adding the second 1TB disk to the system and to the storage pool. This went off without a hitch. As a test, I removed the now unused /var/hda/files/drives/drive4 directory, and when I ran hda-diskmount, sure enough it allocated that unused directory for the drive mount. I again ran greyhole to balance the data and, when it completed, had ~80GB on each disk.

Amahi, Fedora & RAID

Sunday 28 of November, 2010

I've recently taken the plunge to convert most of the server duties for my network into an Amahi digital home assistant (cache). The Amahi product is superb and has matured quickly over the past 12-18 months. I had some interesting issues getting started; maybe this will help you.

Amahi is currently built on top of Fedora 12. When I built my server, I had two drives in it, 1 ATA & 1 SATA. During the initial install, Fedora built an LVM volume spanning the two drives. Knowing I wanted to make use of the drive pooling feature (greyhole, don't ask), I removed the disk from the LVM and got started.

Fedora put /boot and / onto the ATA drive and the system installed fine. Once I had it running, I added a second SATA disk so now I had two 250GB SATA disks I planned to add to the pool. Problem was, I couldn't get a file system on the second SATA disk. I could run cfdisk to delete and create partitions, but when I tried to use mkfs to format the disk, I got an error saying the disk was busy.
I discovered two issues. The first was that one of the SATA drives was disabled in the BIOS (Dell Optiplex GX620). It's not clear if this caused a problem, because I was able to access the drives from within the OS.

The second issue was that Fedora was adding the disks to a RAID group automatically. I'm just starting to understand this, but I used mdadm to remove the disk from the raid group. (Use cat /proc/mdstat to see the names of the devices.) I then used mdadm --zero-superblock /dev/... to prevent the disks from being detected as part of a raid group.

Powershell and a Hash of Custom Objects

Thursday 16 of September, 2010

Like many companies, we regularly import data from our HR system to populate our Active Directory information. We wrote that script several years back using vbscript to read an Excel spreadsheet and populate AD. As we've been learning Powershell, it is time to re-write this task as an exercise in learning and training.

One of the first areas we needed to deal with was populating the address information for our various offices. The HR export contains only minimal location information (Alexandria, Philadelphia, etc.) and we want the full street address in AD. To do this in PoSh, I created an associative array (or hash) of location objects that are custom PoSh objects. This article describes how we made this work. I'm sharing this so we can all learn something about PS hashes.

The detailed address info is kept in a static .CSV file with columns for each field (St, zip, city, etc). That is simply read with import-csv like this:
{CODE (colors=powershell)}
import-csv $LocFile
{CODE}

Next we need to run through each location and build a custom object containing the address details for each. Building on the import, that code looks like this:

{CODE (colors=powershell)}
import-csv $LocFile | foreach {
    $loc = new-object object
    $loc | add-member noteproperty site $_.site
    $loc | add-member noteproperty street $_.street
    $loc | add-member noteproperty city $_.city
    $loc | add-member noteproperty state $_.state
    $loc | add-member noteproperty zip $_.zip
}
{CODE}


The first line reads the csv and pipes each row through the foreach command. Each row is a new location, so we create a new object to hold the address - the
{CODE (colors=powershell)}
$loc = new-object object
{CODE}
line does this. We then add each element of the address to the object using add-member noteproperty.

The last piece of this is associating the location object we create with the site name, so we can access the other details directly. This is accomplished, inside the loop, by the command:

{CODE (colors=powershell)}
$sites[$_.site] = $loc
{CODE}

The $sites variable has to be initialized beforehand using

{CODE (colors=powershell)}
$sites = @{}
{CODE}

to tell PoSH that this is an associative array. Then, each time the loop executes, the object is added to the array of these custom objects.

When we process an employee's account during the update, we read the employee's location from the csv file where we store the city; we then use the city to look up the detailed address information from the array.

{CODE (colors=powershell)}
$EmpAD.l = $Locations[$city].city
$EmpAD.st = $Locations[$city].state
$EmpAD.streetAddress = $Locations[$city].street
$EmpAD.PostalCode = $Locations[$city].zip
{CODE}
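For readers more at home outside PoSh, the same pattern — keying full address records by site name for direct lookup — can be sketched in Python (the sample rows are made up; the column names mirror the .CSV described above):

```python
import csv
import io

# Hypothetical sample rows matching the columns of the static locations file.
LOC_CSV = """site,street,city,state,zip
Alexandria,123 Main St,Alexandria,VA,22301
Philadelphia,456 Market St,Philadelphia,PA,19103
"""

# Build the hash: one entry per site, keyed by site name, with the
# whole row (street, city, state, zip) as the value.
sites = {row["site"]: row for row in csv.DictReader(io.StringIO(LOC_CSV))}

# Lookup by site works just like indexing the PoSh hash.
addr = sites["Alexandria"]
```

Either way, the payoff is the same: one read of the static file up front, then constant-time lookups while processing each employee.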

Powershell and the lastLogonTimestamp

Tuesday 29 of June, 2010

I wrote a query that will find all AD accounts created more than 30 days ago that have Never
logged in or haven't logged in in over 60 days. I used Powershell v2 and the Quest AD module.
Here's the query I started with (reformatted for the reader, but this is a one-liner)

Get-QADuser -searchroot "corp.net/user accounts/users/OurOU" | where {
    ($_.whencreated -lt ((get-date).adddays(-30))) -and
        (($_.lastLogonTimestamp -like "Never") -or
         ($_.lastLogonTimestamp -lt ((get-date).adddays(-60))))
    }

When it runs, accounts that have never logged in are listed correctly; accounts that have logged in generate an error:
"Bad argument to operator "-lt": Cannot compare "Monday, June 16 2010" because it is not IComparable"
(The error is pointing to the last line of code above.)

Since Monday, June 16 2010 looks like a date, I expected it to fail in the comparison to Never, but it fails in comparison to another date.

It turns out the Quest AD snap-in (which is a great tool, BTW) interprets the value of lastLogonTimestamp
to make it display nicely (really, who can understand 175234539836?). What I need is to run
my compare on the raw data. That is accessed by appending .value to the attribute. So the working code looks like this:

Get-QADuser -searchroot "corp.net/user accounts/users/OurOU" | where {
    ($_.whencreated -lt ((get-date).adddays(-30))) -and
        (($_.lastLogonTimestamp.value -like "Never") -or
         ($_.lastLogonTimestamp.value -lt ((get-date).adddays(-60))))
    }

Now PowerShell can access the real value and transpose between variable types to get me the right answer.
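For the curious, the raw value the snap-in hides is a Windows FILETIME: a count of 100-nanosecond intervals since January 1, 1601 (UTC), with 0 meaning the account has never logged in. A rough Python sketch of the conversion (not the snap-in's actual code):

```python
from datetime import datetime, timedelta, timezone

WIN_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(raw):
    """Convert a raw AD timestamp (100-nanosecond intervals since
    1601-01-01 UTC) to a datetime; 0 means 'never logged in'."""
    if raw == 0:
        return None
    return WIN_EPOCH + timedelta(microseconds=raw // 10)

# Build a sample raw value for 2010-06-16 00:00:00 UTC using exact
# integer arithmetic (10**7 hundred-nanosecond ticks per second).
delta = datetime(2010, 6, 16, tzinfo=timezone.utc) - WIN_EPOCH
raw = (delta.days * 86400 + delta.seconds) * 10**7
```

That conversion is essentially what the snap-in does for display; comparing against .value skips it and lets PowerShell coerce the types itself.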