
Greg's Tech blog

My technical journal where I record my challenges with Linux, open source SW, Tiki, PowerShell, Brewing beer, AD, LDAP and more...

Open Upload Invitation MIME fixes

Thursday 12 of January, 2012
I've been working to implement a web-based file exchange system for work as an alternative to sending files through email. Early on we identified Open Upload as a good option for this. While the latest release, 0.4.2, is stable, it is over a year old, and there have been significant changes implemented in svn since then. The svn version is a much better option.

While trying to build it for deployment, I discovered a flaw in the email notifications for invitations: the MIME message format is currently incorrect. After some email exchanges with the project lead, I added two files, invitationNotifyText.tpl and invitationNotifyHtml.tpl, that contain the information needed to generate clean email messages.
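
For context, a clean invitation notice is just a standard multipart/alternative message: headers that declare a boundary, a text/plain part, a text/html part, and a closing boundary marker. The skeleton below is a generic illustration of that structure, not the actual content of the new templates:

MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="----=_openupload_1234"

------=_openupload_1234
Content-Type: text/plain; charset="UTF-8"

You have been invited to upload a file...

------=_openupload_1234
Content-Type: text/html; charset="UTF-8"

<p>You have been invited to upload a file...</p>

------=_openupload_1234--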

In addition, I modified the uploadNotifyHtml.tpl and uploadNotifyText.tpl files to fix some minor typos. I'm attaching the files here in case you need them. Once they are committed to svn, I'll remove them.

invitationNotifyText.tpl
invitationNotifyHtml.tpl
uploadNotifyText.tpl
uploadNotifyHtml.tpl

Sorting Music Files with Powershell

Wednesday 11 of January, 2012
I'd collected a bunch of music files over time, all in a single directory. I wanted to sort them into a folder structure arranged at the top level by artist and then by album.

I found a great article by Tobias Weltner here (cache) which is the basis for my script.

The details for music files can be accessed through a Windows Shell object and we can set that up like this:

$path = 'C:\Users\Public\Music\Sample Music\Maid with the Flaxen Hair.mp3'
$shell = New-Object -COMObject Shell.Application
$folder = Split-Path $path
$file = Split-Path $path -Leaf
$shellfolder = $shell.Namespace($folder)
$shellfile = $shellfolder.ParseName($file)

and once we have that, we can list the possible attributes with this code:
0..287 | Foreach-Object { '{0} = {1}' -f $_, $shellfolder.GetDetailsOf($null, $_) }

Now that you see how to read these Extended attributes, read Tobias' code for the Add-FileDetails function. It enumerates the requested attributes for a given file. I'll not repeat that code here.
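
As a quick aside (this is not part of Tobias' function): GetDetailsOf returns the column name when you pass $null and the file's value when you pass the file object. The index of the artist field varies between Windows versions, so the 13 below is only an example:

'{0}: {1}' -f $shellfolder.GetDetailsOf($null, 13), $shellfolder.GetDetailsOf($shellfile, 13)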

What I needed was some code that would:
  • enumerate all the files in a directory,
  • find the artist and album name for each file,
  • construct a destination path for each file,
  • create any needed folders, and
  • move the files.
Warning - This code is NOT pretty

$sourcepath = ""
   #enumerate fields & gather attributes
dir "$sourcepath*.*" |Add-FileDetails|foreach {
    $mfile = $_
       #build artist level path
    $mpath = "" + $mfile.Ext_Artists
    $mpath = "\Music\" + $mpath

      #build album level path and combine them
    $mAlbum = $mfile.Ext_Album
    $mapath = $mpath + "\" + $mAlbum
    
    write-host $mapath $mfile.name

    if (! (Test-path $mpath)) {
        echo "Creating artist folder: $mpath"
        mkdir $mpath
        }
    if (!(test-path $mapath)) {
        echo "Creating album path $mapath"
        mkdir $mapath
        }

       #assuming the path was created or already existed, move files
    if (Test-path $mapath) {
        Move-item -literalpath $mfile $mapath
        #$mfile.move($mpath)
    } else {
    echo "Unable to create path: $mpath"
    }
  }


(Note: I made one mod to Tobias' code that you may notice: instead of prefixing the attributes with Extended_, I shortened the prefix to Ext_.)

Issues
One thing I had to deal with was special characters in some of the artist and album names. I did that with a series of -replace commands that I left out of the code above for readability. Here's a snippet; you may need to adjust it based on your data.

#replace \, /, :, ; and ' in names with an appropriate alternative
    $mpath = "" + $mfile.Ext_Artists
    $mpath = $mpath -replace "\\", "-"
    $mpath = $mpath -replace "/", "-"
    $mpath = $mpath -replace ":", "-"
    $mpath = $mpath -replace ";", "-"
    $mpath = $mpath -replace "'", ""


Passive Nagios Checks

Wednesday 07 of December, 2011

Had to learn how to submit passive Nagios checks. Here are the steps

  • Define a service or modify a service template to set the directives
passive_checks_enabled 1
active_checks_enabled	0

  • Install and configure the nsca service on the Nagios server
  • Install and configure the send_nsca utility on the server that needs to submit the check.

  • Write your service check to output the following text to a file called outfile:
Hostname;Service Description;return code;text output

  • Execute send_nsca to send the output to the Nagios server with this command line:
cat outfile | send_nsca -H nagios_svr_addr -d \; -p 5667

This command sends the file contents received from stdin to send_nsca.
Note: the default delimiter for the output file is a tab. I changed it to a semi-colon for simplicity here and set that option by using -d on the command line. As ; is special to bash, I escaped it with \.
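
For example, a backup check on a web server might append a line like this to outfile before calling send_nsca (the host and service names are made up; the return code uses the usual Nagios convention of 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN):

webserver1;Nightly Backup;0;OK - backup completed in 42 minutes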

Nagios check_by_ssh configuration

Tuesday 29 of November, 2011

I just spent a couple of hours with Nagios setting up remote Linux monitoring using check_by_ssh. There are some pitfalls that I discovered that may save you some trouble; a sample command definition follows the notes below.

- The local user account that Nagios runs under and that the checks will be initiated by (for my install it's nagios) must have a home directory and shell defined. Failure to do so may result in the error: Remote command execution failed: Could not create directory '/.ssh'.

- To complete configuration of the nagios account, log in as nagios (I used su - nagios), then use ssh to log in to the remote host you wish to monitor and successfully cache the remote host's fingerprint.

- The account on the remote box used for monitoring must also have a valid shell and home directory defined. Failure to do so may result in a No protocol specified error.
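
For reference, a typical check_by_ssh setup pairs key-based ssh from the nagios user with a command definition along these lines. The plugin path, login name and thresholds below are examples, not copied from my config:

define command{
        command_name    check_remote_disk
        command_line    $USER1$/check_by_ssh -H $HOSTADDRESS$ -l nagios -C "/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /"
        }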

Copying Active Directory OU Structure, Groups and Accts to a Test Domain (CopyAD)

Saturday 25 of June, 2011
I need to copy some of our AD contents into a test domain. This has come up before so I wrote a collection of PowerShell scripts to handle the process.

For our current needs, we need OUs, users, groups and group memberships copied over. I worked this out over a couple days and developed a series of scripts that exports, renames and imports the objects into AD.

The scripts come in 8 parts - 4 export and 4 import. The import scripts must be executed in a particular order so that the necessary parts are available when needed. That order is OUs, Users, groups, group memberships.

The export scripts are interesting because they include a large number of AD attributes for the users yet filter out things like SIDs and passwords, so they are safe to use from a security perspective. Note too that the user accounts are disabled upon creation. This is easily remedied but left to the scripter.

The import scripts are a bit more complicated as they replace certain attributes with corresponding values from the new domain. Specifically, UPN, DN and mail are fixed up. Also, there's a neat trick played with split to drop off the cn=username portion of the DN so that the OU path for the new object is correct.
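
To illustrate the split trick (this is a simplified sketch, not the actual copyAD code, and the names are made up): splitting the DN on the first comma drops the CN=username piece and leaves just the OU path, which can then be re-pointed at the test domain.

$dn = "CN=jsmith,OU=Staff,OU=City,DC=ourco,DC=com"

# split on the first comma only; element 1 is everything after the CN= portion
$ouPath = ($dn -split ",", 2)[1]

# swap the old domain suffix for the test domain's suffix
$newPath = $ouPath -replace "DC=ourco,DC=com$", "DC=testdom,DC=local"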

One last point: I chose not to deal with the Exchange install in my test domain, so some of the Exchange-related groups error out during creation.

Find the scripts as the copyAD Suite in the TechNet Script Repository

Expanding a disk in VirtualBox

Sunday 29 of May, 2011

This isn't a problem I should have had to deal with, but I can be stupid. Like other virtualization products, VirtualBox offers a dynamic disk option that allows you to overestimate how large a disk you'll need without penalizing you with wasted disk space. VBox will grow the disk image as you need it rather than pre-allocate the entire disk.

I built a Win7 virtual some time ago and set the C:\ to a max of 20GB. In hindsight, I should have made it 40GB. Today I dealt with it using Clonezilla and Windows 7 disk manager. Here's how.
(Note: After doing all this, I learned about VirtualBox's built-in feature to resize a disk using 'VBoxManage modifyhd'. You may want to look into that first.)
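
If you want to try that route instead, the command looks roughly like this; the new size is given in MB, the filename is an example, and as far as I know it only works on dynamically allocated VDI/VHD images, so check the documentation for your VirtualBox version:

VBoxManage modifyhd "Win7.vdi" --resize 40960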


Plan

My virtual had two IDE drives (master & slave). The master was the system (c:) disk and the slave a data disk. My plan was to create a new virtual disk; clone the system disk to it and discard the original disk.

Create a new disk

I opened the settings for the virtual and created a new disk. I attached it to a SATA controller since the IDE controller was full. I made the disk dynamic with a max size of 40GB.

Clone the disk

With the new disk attached, I then booted the virtual with the Clonezilla disc mounted to the virtual from the host optical drive. Clonezilla booted and I followed the instructions to clone from disk to disk. The source disk in my case was sda and the destination sdc. Clonezilla made quick work of the clone and I powered off the system once it had completed.

Reset the drive attachments

Since I already had a master IDE disk attached to the virtual, I couldn't boot to the new disk. I opened the drive settings again and did a couple of things:
-- Dropped the original system disk
-- Detached the data disk and reattached it to SATA along with the new system disk.

Boot and resize the new disk

I then booted the VM on the new system disk, which it handled without issue. The system partition was still 20GB; this was expected behavior since Clonezilla cloned the partition table along with the partition data. I opened Windows Disk Manager and saw that Windows reported the 20GB system partition and 20GB of unallocated space on the disk. I selected the system partition, right-clicked and selected Extend Volume. The wizard that came up stepped me through the process of extending the volume into the remaining unallocated space.

Reboot

While Windows didn't ask for it, I restarted the virtual to ensure everything was healthy.


Sorting Computers by OS with PowerShell

Wednesday 11 of May, 2011
We needed to move a hundred or so computers into different OUs based on their operating system today. We weren't sure how to approach this at first, but a quick search revealed that AD tracks a computer account's operating system in an attribute called, oddly enough, operatingSystem. With that in mind, we developed the following PowerShell command line to find and move Windows XP accounts.

Sorting Computers with Powershell
get-adcomputer -SearchScope onelevel -searchbase "ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" -filter 'operatingsystem -like "Windows XP*" ' | move-adobject -targetpath "ou=FDCC,ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" -passthru


To talk this through a bit, the first part of this is the query to locate computer accounts:
Search
get-adcomputer -SearchScope onelevel 
  -searchbase "ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" 
   -filter 'operatingsystem -like "Windows XP*" '


This command uses the get-adcomputer cmdlet to:
  • search an OU specified by the -SearchBase parameter,
  • restrict the search to only this OU (not sub-OUs) via the -SearchScope parameter,
  • find computers whose operatingSystem starts with "Windows XP" (the trailing * matches anything) via the -Filter parameter, and
  • send the results down the pipeline.

The second half of this command
Move
move-adobject -targetpath "ou=FDCC,ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" -passthru

uses the Move-ADObject cmdlet to move the AD objects passed through the pipeline to the AD container specified by -Targetpath.

It took us a total of 10 minutes to work through the help, define the search and move action, test using the -WhatIf switch, and implement it. We then repeated the whole thing to search for Windows 7 PCs and move them into a separate container.
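
If you want to dry-run something similar first, tack -WhatIf onto the move. The Windows 7 target OU below is made up for illustration:

get-adcomputer -SearchScope onelevel -searchbase "ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" -filter 'operatingsystem -like "Windows 7*" ' | move-adobject -targetpath "ou=Win7,ou=laptops,ou=technical,ou=workstations,ou=city,dc=ourco,dc=com" -whatif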



Running PS3 Media Server as a non-root service on Fedora and Amahi

Tuesday 29 of March, 2011

Sorry for the detailed title, but this problem has been solved several times, so I want to justify why I am solving it again.
I use PS3 Media Server (PMS) on Amahi to stream content to my Sony TV and PS3. I am helping to package the app for Amahi.

In doing so, I was having difficulty running PMS as a non-root user on Amahi and discovered the following:

  • Fedora services use the runuser command to launch a new shell as the specified user and run the specified command in that shell (runuser commandline is in the /etc/init.d/functions script)

  • PMS locates the PMS.conf file within the current directory when it is executed. There is currently no way to specify the location of the conf file.

  • When runuser executes a command, it drops the user into their home directory first and executes from there. For the apache user (which Amahi uses for these services), there is no home directory, making it even more confusing.

With all this background, the solution was straightforward: change the command we pass to runuser to cd to $PMS_HOME and then execute the java command. Here's the change I made to the pmsd service script:

daemon -20 --user $PMSUSER "cd $PMS_HOME && $JAVA $JAVA_OPTS"
The key being the addition of cd $PMS_HOME && prior to executing the java command.


The entire service script is below:
(Note: I did not write this script. Thanks to the Amahi & Fedora communities for that.)
#!/bin/bash
#
#       /etc/rc.d/init.d/pmsd
#
# Starts the PS3 Media Server
#
# chkconfig: 345 70 80
# description: PS3 Media Server
# processname: java

### BEGIN INIT INFO
# Provides: pmsd
# Required-Start: $syslog $local_fs
# Required-Stop: $syslog $local_fs
# Default-Start:  3 4 5
# Default-Stop: 0 1 6
# Short-Description: start and stop pmsd
# Description: PS3 Media Server
### END INIT INFO

#PMSUSER=pmsd
PMSUSER=apache
PMSGROUP=users
JAVA=`which java`

PMS_HOME="/var/hda/web-apps/ps3mediaserver/html"
PMS_JAR="$PMS_HOME/pms.jar"
PMS_JARS="$PMS_HOME/update.jar:$PMS_HOME/pms.jar:$PMS_HOME/plugins/*"
JAVA_OPTS="-Xmx768M -Xss16M -Djava.encoding=UTF-8 -Djava.net.preferIPv4Stack=true -classpath $PMS_JARS net.pms.PMS -Djava.awt.headless=true $@ >>/var/log/pmsd.log 2>>/var/log/pmsd.log &"	

PMSDPID=/var/run/pmsd.pid

export PMS_HOME

# Source function library.
. /etc/rc.d/init.d/functions

RETVAL=0

start() {

        # Check if pms is already running
        if [ ! -f /var/lock/subsys/pmsd ]; then
            echo -n $"Starting PMS daemon: "
            daemon -20 --user $PMSUSER "cd $PMS_HOME && $JAVA $JAVA_OPTS"
            RETVAL=$?
            [ $RETVAL -eq 0 ] && touch /var/lock/subsys/pmsd
            echo
        fi
        return $RETVAL
}

stop() {

        echo -n $"Stopping PMS daemon: "
        killproc $JAVA
        RETVAL=$?
        [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/pmsd
        echo
    return $RETVAL
}


restart() {
        stop
        start
}

case "$1" in
start)
        start
        ;;
stop)
        stop
        ;;
restart)
        restart
        ;;
status)
        status $JAVA
        RETVAL=$?
        ;;
*)
        echo $"Usage: $0 {start|stop|status|restart}"
        RETVAL=2
esac

exit $RETVAL

Restoring a DFS root

Sunday 13 of March, 2011
We make heavy use of Microsoft's file share virtualization technology - Distributed File System (DFS). Today, one of our root DFS shares got deleted and we had to scramble to get it back. Here's what we tried and what worked.

Since the object seemed to reside in AD, under the CN=System container in our domain, the first thing we tried was an Active Directory undelete (a recycle bin was added to AD in Win2k8). We tried several tools and methods to restore the object.

Each of these failed with a similar error. It appears that the AD object had some key attributes removed when it was deleted, so the object in the deleted items container was not a valid AD object (and hence would not restore). My guess is that Microsoft has not designed all AD object deletions with restoration in mind.

So here's what we did that worked
  1. We restored one of our virtualized DCs to a new VM with no network connection
  2. Since the DFS root was not on this DC, we created an identical DFS root on that DC
  3. AD magically repopulated the DFS shares that were configured below the deleted root. We suspect this happened because that DC's copy of AD still thought the root existed
  4. Exported the configuration using dfsutil
  5. Shut down the VM and opened the VHD so we could copy out the files dfsutil created
  6. Edited the DFSUtil output to remove the entry for the new DC
  7. Imported the dfs config using dfsutil with the /Set switch
  8. Tested

Lessons learned:
  • We are considering a scheduled task to export the DFS config using dfsutil (see the example command below)
  • We set the "Protect object from accidental deletion" on each of the DFS objects in AD
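
From memory, the Win2k3-era dfsutil export looks something like the line below; treat the exact switches as an assumption and check dfsutil /? on your version before scheduling it:

dfsutil /root:\\ourco.com\dfsroot /export:C:\backup\dfsroot.txt /verbose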

Note: I doubt this is a Microsoft approved solution, so, YMMV.

If you have thoughts on this, leave a comment here or on Twitter (I'm @uSlacker)


Watching the This Week in Tech (TWiT) network on PS3 Media Server

Thursday 10 of March, 2011

I've been working with the PS3 Media Server (cache) to stream video to my Sony Bravia TV (which is picky about formats). PMS also supports web feeds and streams. Since some of my favorite podcasts are from the TWiT network (cache), I did some work to make the shows available through PMS.
The WEB.conf file is used to set this up. I started by grabbing the YouTube playlists for each show, but the streams from YouTube are slow, stutter and generally suck. Instead, I grabbed the RSS feeds from twit.tv and added the following to my WEB.conf:

videofeed.Web: Video,TWiT=http://feeds.twit.tv/twig_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/twit_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/ww_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/floss_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/ipad_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/mbw_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/tnt_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/sn_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/dgw_video_large
videofeed.Web: Video,TWiT=http://feeds.twit.tv/tnt_video_large


Next I need to figure out how to stream the live feed. Any idea how to find the H.264 feed?

Update: I found the feed address to stream TWiT Live (cache). Add this line to WEB.conf:
videostream.Web: Video,TWiT=TwiTLive,http://bglive-a.bitgravity.com/twit/live/high

Streaming Video to Sony Bravia TV using Amahi

Thursday 10 of March, 2011
(Note: after this writing, I found that Amahi has a beta for PS3 Media Server. It is still untested but might be a better route to take).

I have a Sony Bravia KDL-46Z5100 and a Sony PS3. Both are capable of streaming via DLNA, but they are very picky about the container and video formats they will consume. I want to be able to stream video from my Amahi HDA (cache) server. There are several DLNA apps available for the HDA, but I needed something that transcoded on-the-fly in order to get what I needed.

I had been playing with the PS3 Media Server (cache). It is a java-based open source tool that was made to transcode. The project has matured significantly in the past year as the community has stepped in to push the project along after the founder got busy with life. I'd been using PMS on my Slackware server for the better part of '10, but that server was on a wireless connection and results were flaky.
With my Amahi server hardwired in to the network, it became a natural place for the PMS.

I installed PMS by un-tarring the files into a directory (/var/pms for my system). I hacked the service script for another java-based service so I could have it start at boot. That script can be downloaded here. It needs some work as it currently gives an error during shutdown (but the app stops correctly). Save this as ps3_media_server in the /etc/init.d directory on your HDA. Then run these two commands:
chkconfig ps3_media_server on
service ps3_media_server start

My next step will be to get this wrapped into a package for Amahi. Let me know if you can make use of this

\\Greg

Active Directory is the killer app for Powershell

Friday 25 of February, 2011
We did something not so smart in AD a few months back. To fix it, we needed to reset a bunch of passwords and clear the passwordneverexpires flag on some 250 accounts. PoSh to the rescue!
Note: this is a Win2k3 domain that we don't own, so I have to use the Quest AD cmdlets.
To find the accounts, I did this:

get-qaduser -searchroot /path/to/OU -passwordneverexpires | select Name,DN,Samaccountname,passwordneverexpires | export-csv "C:\temp\file.csv"

Next we took the exported csv file and added a column named password and generated a bunch of strong passwords. What was left was a one-liner to make the changes (Note: I removed the object type definition from the first line of the csv):

import-csv "c:\temp\file.csv" | foreach { get-qaduser $_.samaccountname | set-qaduser -userpassword $_.password -passwordneverexpires $false }

The ease with which import-csv allows you to read in and address the fields of a csv/spreadsheet is incredible. The way the Quest cmdlets and the MS AD cmdlets allow you to act on multiple accounts at once is powerful.

Amahi Drive Replacement

Sunday 13 of February, 2011
My Amahi HDA (cache) server was reporting two SMART drive warnings. The boot drive and one of the storage pool drives had more bad sectors than SMART preferred, and I'd been looking for replacement drives for several weeks. I found a pair of 1TB Seagate drives at NewEgg for $55 each. Their free shipping cinched the deal.

My plan was this:

  • Replace the bad (SATA) disk in the pool
  • Move the data to one of the new disks
  • Remove the second (SATA) drive from the pool
  • Use this good SATA drive to replace the failing ATA boot drive
  • Add the second new SATA drive to the pool

The storage pool in Amahi is handled by greyhole (cache), a truly ingenious technology similar in idea to Microsoft's now defunct Drive Extender. But using greyhole makes swapping out a pool drive a non-trivial operation.

The first step in changing a drive is to get the data off the drive. The tricky part was figuring out which physical drive had the error and then which drive that represented in the pool. Fortunately, I had two different drive manufacturers in the pool. Looking at the /dev/disk/by-id info allowed me to determine it was the WD drive in my pool. From there I determined the WD drive was mounted on /var/hda/files/drives/drive4.

The command for telling greyhole to move the data off this drive is:

greyhole --going=/var/hda/files/drives/drive4/gh

Greyhole will tell you to wait while it goes off to move the data. I had 150GB of files on the disk. It probably ran an hour or so (I didn't time it). Once finished, I could verify with the greyhole -s command that there was no data on the disk.

The next step is to remove the disk from fstab. I always choose to comment out the line by adding a leading #. That way I could put it back without issue. Of course, I failed to make the fstab change prior to shutting down and removing the disk. Fedora would not boot afterwards, telling me there was a disk error and dropping me to a shell to fix it. Oddly, I could not edit fstab from that shell, so I put the disk back in, booted, edited fstab and shut down again. At this point, all the storage pool data was on a single drive.

The next step was to get the replacement drive in place. I followed the instructions for adding a second hard drive to my HDA (cache). It comes down to partitioning the disk, adding a file system (I use ext4) and mounting it in the right place. The hda-diskmount command will find the drive once formatted and suggest a location and fstab entry. This is handy and it worked as advertised.

One thing of note: Amahi mounts drives under /var/hda/files/drives/driveX. It doesn't seem to detect whether old mount points are still in use, so it always creates a new mount point. This is safe, but messy. When I added the new 1TB disk, it was assigned to 'drive5' even though there were 3 unused directories already.

I added the fstab entry and rebooted. I then added the new disk to the storage pool and everything was good. I got to thinking that all the data was still on the first disk and wondered if there was a way to balance the data across drives. A review of the greyhole help and a question in the #amahi IRC channel pointed me to the balance option. I ran

greyhole --balance

and monitored the greyhole.log file to watch greyhole making a bunch of data moves. It ran for a while.

Once that was complete, I had to tackle replacing the drive that contained the boot partition. For this, Clonezilla was the obvious choice. I don't have any experience with the tool. Since the disks were different sizes (the ATA disk was 160GB and the SATA 250GB), I started with cloning only the Fedora boot and root partitions. This left me with an unbootable SATA disk. I then tried a full disk-to-disk clone and this worked. I wound up with a 200MB boot partition, a 160GB root and 83GB of unallocated space. The disk worked as expected.

I wasn't satisfied with the unallocated space, so I turned to the gParted Live CD to resize the root partition to add the additional space. Again, this was flawless.

At this point I had only one remaining task - adding the second 1TB disk to the system and to the storage pool. This went off without a hitch. I did one thing as a test: I removed the now unused /var/hda/files/drives/drive4 directory, and when I ran hda-diskmount, sure enough it allocated that unused directory for the drive mount. I again ran greyhole to balance the data and, when it completed, had ~80GB on each disk.

Amahi, Fedora & RAID

Sunday 28 of November, 2010
I've recently taken the plunge to convert most of the server duties for my network into an Amahi digital home assistant (cache). The Amahi product is superb and has matured quickly over the past 12-18 months. I had some interesting issues getting started, maybe this will help you.

Amahi is currently built on top of Fedora 12. When I built my server, I had two drives in it, one ATA and one SATA. During the initial install, Fedora built an LVM with the two drives. Knowing I wanted to make use of the drive pooling feature (greyhole, don't ask), I removed the disk from the LVM and got started.

Fedora put /boot and / onto the ATA drive and the system installed fine. Once I had it running, I added a second SATA disk, so now I had two 250GB SATA disks I planned to add to the pool. Problem was, I couldn't get a file system on the second SATA disk. I could run cfdisk to delete and create partitions, but when I tried to use mkfs to format the disk, I got an error saying the disk was busy.
I discovered two issues. The first was that one of the SATA drives was disabled in the BIOS (Dell Optiplex GX620). It's not clear if this caused an issue, because I was able to access the drives from within the OS.

The second issue was that Fedora was adding the disks to a RAID group automatically. I'm just starting to understand this, but I used mdadm to remove the disk from the RAID group. (Use cat /proc/mdstat to see the names of the devices.) I then used mdadm --zero-superblock /dev/... to prevent the disks from being detected as part of a RAID group.
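
For anyone hitting the same thing, the commands look roughly like this. The md and disk device names are examples (check the cat /proc/mdstat output for yours), and depending on how the array was assembled you may need mdadm --fail/--remove on a member instead of stopping the whole array:

cat /proc/mdstat                     # find the auto-assembled raid device and its member disks
mdadm --stop /dev/md127              # stop the array
mdadm --zero-superblock /dev/sdb1    # wipe the raid superblock so the disk isn't re-detected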


Powershell and a Hash of Custom Objects

Thursday 16 of September, 2010
Like many companies, we regularly import data from our HR system to populate our Active Directory information. We wrote that script several years back using vbscript to read an Excel spreadsheet and populate AD. As we've been learning Powershell, it is time to re-write this task as an exercise in learning and training.

One of the first areas we needed to deal with was populating the address information for our various offices. The HR export contains only minimal location information (Alexandria, Philadelphia, etc.) and we want the full street address in AD. To do this in PoSh, I created an associative array (or hash) of location objects that are custom PoSh objects. This article describes how we made this work. I'm sharing this so we can all learn something about PS hashes.

The detailed address info is kept in a static .CSV file with columns for each field (St, zip, city, etc.). That is simply read with Import-Csv like this:

import-csv $LocFile

Next we need to run through each location and build a custom object containing the address details for each. Building on the import, that code looks like this:

import-csv $LocFile | foreach {
    $loc = new-object object
    $loc | add-member noteproperty site   $_.site
    $loc | add-member noteproperty street $_.street
    $loc | add-member noteproperty city   $_.city
    $loc | add-member noteproperty state  $_.state
    $loc | add-member noteproperty zip    $_.zip
    $sites[$_.site] = $loc
}


The first line reads the csv and pipes each row through the foreach command. Each row is a new location, so we create a new object to hold the address - the

$loc = new-object object

line does this. We then add each element of the address to the object using add-member noteproperty.

The last piece of this is associating the location object we create with the site name so we can access the other details directly. This is accomplished by the command:

$sites[$_.site] = $loc

The $sites variable has to be initialized first using

$sites = @{}

to tell PoSh that this is a hash table (associative array). Then each time the loop executes,

$sites[$_.site] = $loc

adds the custom object to the hash, keyed by site name.

When we process an employee's account during the update, we read the employee's location (the city) from the HR csv file; we then use the city to look up the detailed address information from the hash (called $Locations here):

$EmpAD.l = $Locations[$city].city
$EmpAD.st = $Locations[$city].state
$EmpAD.streetAddress = $Locations[$city].street
$EmpAD.PostalCode = $Locations[$city].zip

Powershell and the lastLogonTimestamp

Tuesday 29 of June, 2010
I wrote a query that will find all AD accounts created more than 30 days ago that have never
logged in or haven't logged in in over 60 days. I used Powershell v2 and the Quest AD module.
Here's the query I started with (reformatted for the reader, but this is a one-liner):

Get-QADuser -searchroot "corp.net/user accounts/users/OurOU" | where 
 { 
   ($_.whencreated -lt ((get-date).adddays(-30)) ) 
   -and 
         ( 
            ( $_.lastLogonTimestamp -like "Never") 
               -or 
            ($_.lastLogonTimestamp -lt ((get-date).adddays(-60)))
         ) 
 }

When it runs, accounts that have never logged in are listed correctly; accounts that have
logged in generate an error:
"Bad argument to operator "-lt": Cannot compare "Monday, June 16 2010" because it is not IComparable"
(The error is pointing to the last line of code above.)

Since Monday, June 16 2010 looks like a date, I expected it to fail in the comparison to Never, but it fails in comparison to another date.

It turns out the Quest AD snap-in (which is a great tool, BTW) interprets the value of lastLogonTimestamp
to make it display nicely (really, who can understand 175234539836?). What I need is to process
my compare on the raw data. That is accessed by appending .value to the attribute. So the working code looks like this:

Get-QADuser -searchroot "corp.net/user accounts/users/OurOU" | where 
 { 
   ($_.whencreated -lt ((get-date).adddays(-30)) ) 
   -and 
         ( 
            ( $_.lastLogonTimestamp.value -like "Never") 
               -or 
            ($_.lastLogonTimestamp.value -lt ((get-date).adddays(-60)))
         ) 
 }

Now PowerShell can access the real value and convert between variable types to get me the right answer.

\\Greg

Slackware 13.1 upgrade

Saturday 26 of June, 2010
I just ran the 13.1 upgrade for Slackware. For the most part it went well. One thing of note: mysqld would not start.
The following error was in the log file:


100626 15:25:22 ERROR Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
100626 15:25:23 InnoDB: Started; log sequence number 0 1209759
100626 15:25:23 ERROR /usr/libexec/mysqld: unknown option '--skip-federated'
100626 15:25:23 ERROR Aborting

100626 15:25:23 InnoDB: Starting shutdown...
100626 15:25:28 InnoDB: Shutdown completed; log sequence number 0 1209759
100626 15:25:28 Note /usr/libexec/mysqld: Shutdown complete

I first focused on the errors regarding mysql.plugin, but after reading some forum posts,
I turned my attention to the unknown option '--skip-federated' error.

I found the entry skip-federated in the /etc/my.cnf file and commented it out.
The server started just fine.

I then ran mysql_upgrade to clean up the other error. Looks great now.


Signed Powershell scripts in the enterprise

Tuesday 29 of December, 2009
I want to start using Powershell within our company to manage repeating tasks and general administrative tasks. Powershell was deployed by Microsoft in a very secure configuration - so much so that it will not run a file-based script in its default mode. I do not wish to break this secure default. I've learned to sign scripts so that I can safely downgrade security to 'allsigned', which will allow PS to run scripts that are signed with a trusted cert.

The next step is to deploy our code signing cert so that other machines can run scripts without manual intervention. The first part of that is to add the cert as a trusted publisher in a group policy. The instructions for doing that are in this Best Practices (cache) document from MS. That worked great for us.

Next, we needed to roll out a group policy change to set the ExecutionPolicy to allsigned. This was accomplished using a group policy admin template (cache) from MS. Once this was installed and imported into the group policy manager, we were able to enable the ExecutionPolicy setting and set it to AllSigned. We then deployed the GPO to the appropriate machine OUs within our domain and Powershell was automatically reconfigured.

Now we're ready to start using signed scripts!

DD-WRT Wireless Bridge

Monday 28 of December, 2009
I had to move my office to a new room in the house that does not have a wired ethernet connection. Since running wires in our 100 yr-old house is painful, I went looking for a wireless solution. One of my chief drivers is 'not' spending money. I have a Linksys WRT54GS that I could make use of. I've been toying with the idea of using a DD-WRT (cache) firmware in this router, but never had a real need - until now.

DD-WRT is a 3rd-party firmware for the WRT-series routers and supports, among other things, wireless client bridging. Following these instructions at the DD-WRT wiki (cache), I was able to bridge the wired connection on the WRT54GS to my wireless FIOS network.

One item of note - I was unable to make the bridge work with WPA2 encryption on my Actiontec router. I had to back off to WPA encryption.

\\Greg

Windows 2008 R2 activation

Thursday 08 of October, 2009
I usually only post issues that I've personally resolved. This one was solved by one of my crack engineers (with help from the internet) and bears repeating.

We are trying to implement Windows Server 2008 R2 and are now making use of the corporate KMS server that resides out in the corporate cloud. We made the appropriate firewall changes but every attempt at activation was met with the error:
0x80070005 Access is denied: the requested action requires elevated privileges

Lots of research led to a single internet post here (cache).

It comes down to this: if you have a Group Policy applied to the server you are trying to activate that forces the Plug & Play service to Automatic startup, you will get this error. Change the group policy setting to Not Defined and all is well. Note this has nothing to do with the actual state of the P&P service. Stopping it will not help.

One other note. If you attempt to use MSDN product keys to activate a Win 2008 R2 domain member server under a similar GPO, those keys will NOT work either.

\\Greg

Signing Powershell scripts

Friday 02 of October, 2009
I've recently begun writing powershell scripts. I'm a bit late to this game, but better late than never. As the PS team did a great job of ensuring PS scripts were secure by default, I want to do the right thing and sign all my scripts rather than weaken the security setting.

That's easy to do. We have a CA on our domain and I signed up for a code signing cert from the server. I then wrote a small function to sign my scripts and added it to my profile. It looks like this:
function signIt {
	Set-AuthenticodeSignature $args[0] @(Get-ChildItem cert:\CurrentUser\My -codesigning)[0]
    }

To sign a script you can enter this at a PS prompt:
signit c:\mycode\mypsscript.ps1
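
To double-check that a signature took, Get-AuthenticodeSignature will show the status (Valid, NotSigned, HashMismatch, etc.):

Get-AuthenticodeSignature c:\mycode\mypsscript.ps1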

All well and good, right?


OK, so that worked fine. Several weeks later, I created a new script and when I tried to sign it I would get an "unknown error" saying the "data is invalid". It took a fair amount of googling with Bing to find no answer. I turned to the MS newsgroups and found an answer from Robert Robelo. He said this:

"It's the encoding.
By default PowerShell's ISE encodes a new script in BigEndian Unicode.
PowerShell can't sign BigEndian Unicode encoded scripts. (Oops!)
So, for any new script you create -or any created before- through the ISE that you want to sign, open it and set the encoding you prefer.
Besides BigEndian Unicode, the other valid values are:
ASCII
Default
Unicode
UTF32
UTF7
UTF8"

Sure enough, looking at the file encoding using Notepad++, I could see the file was encoded as UCS-2 Big Endian. I used Notepad++ to convert the file to UTF-8 and I was able to successfully sign the script. Hats off to Robert for the tip. I'm documenting it here so others may find it easier.

\\Greg

Stopping Akonadi in KDE 4.2

Saturday 05 of September, 2009
I loaded Slackware 13 on a laptop I use mostly for remote administration and other casual uses. At every startup, Akonadi also starts up; the progress bar jumps on top, stays there for 30 seconds and generally makes a nuisance of itself. Since I'm not likely to use this laptop as a PIM, I poked around to get it to stop.

- First thing: by default, Korganizer runs in the tray for reminder notifications. I right-clicked and disabled notifications.

- Next I looked at KResources in the KDE Settings applet and made sure none of them were associated with an Akonadi store.

- Finally, I stopped the kres-migrator from running (and trying to convert my non-existent data into Akonadi) using this command:
kwriteconfig --file kres-migratorrc --group Migration --key Enabled --type bool false

Printing to the Lexmark e260dn from Linux

Saturday 29 of August, 2009
Found a great tip on how to print to this printer here (cache). Here's my attempt.

I have the e260dn attached to a WinXP desktop that has IPP printing enabled.
- Download the PPD file for the e352dn from here (cache) and save it to a file

- From within the Cups Printer mgmt page select Add Printer
- Specify the name and other details, click Continue
- For Device select Appsocket/HPJetdirect, click continue
- In the URI field enter http://printserver/printers/printersharename/.printer, replacing printserver with the WinXP computer name and printersharename with the Windows printer share name.
- Upload the PPD file saved in step one to the CUPS server in the 'Provide a PPD file' box, then click Add Printer.




Slackware 13 & Broadcom wireless

Monday 10 of August, 2009
What a pain this was to figure out. Here are my notes

- Read this link (cache)

- You need to install and build two packages from SlackBuilds.org
-- b43-fwcutter
-- b43-firmware

-- Run b43-fwcutter-012/b43-fwcutter -w "/lib/firmware" wl_apsta_mimo.o

-- install wicd from /extra

--reboot