
Greg's Tech blog

My technical journal where I record my challenges with Linux, open source SW, Tiki, PowerShell, Brewing beer, AD, LDAP and more...

Watching the This Week in Tech (TWiT) network on PS3 Media Server

Thursday 10 of March, 2011


I've been working with the PS3 Media Server (cache) to stream video to my Sony Bravia TV (which is picky about formats). PMS also supports web feeds and streams. Since some of my favorite podcasts are from the TWiT network (cache), I did some work to make the shows available through PMS.
The WEB.conf file is used to set this up. I started by grabbing the YouTube playlists for each show, but the streams from YouTube are slow, stutter and generally suck. Instead, I grabbed the RSS feeds from twit.tv and added the following to my WEB.conf:

videofeed.Web: Video,TWiT=http://feeds.twit.tv/twig_video_large

videofeed.Web: Video,TWiT=http://feeds.twit.tv/twit_video_large

videofeed.Web: Video,TWiT=http://feeds.twit.tv/ww_video_large

videofeed.Web: Video,TWiT=http://feeds.twit.tv/floss_video_large

videofeed.Web: Video,TWiT=http://feeds.twit.tv/ipad_video_large

videofeed.Web: Video,TWiT=http://feeds.twit.tv/mbw_video_large

videofeed.Web: Video,TWiT=http://feeds.twit.tv/tnt_video_large

videofeed.Web: Video,TWiT=http://feeds.twit.tv/sn_video_large

videofeed.Web: Video,TWiT=http://feeds.twit.tv/dgw_video_large



Next I need to figure out how to stream the live feed. Any idea how to find the H.264 feed?

Update: I found the feed address to stream TWiT Live (cache). Add this line to Web.conf:

videostream.Web: Video,TWiT=TwiTLive,http://bglive-a.bitgravity.com/twit/live/high

Streaming Video to Sony Bravia TV using Amahi

Thursday 10 of March, 2011

(Note: after this writing, I found that Amahi has a beta for PS3 Media Server. It is still untested but might be a better route to take).

I have a Sony Bravia KDL-46Z5100 and a Sony PS3. Both are capable of streaming via DLNA but they are very picky about the container and video formats they will consume. I want to be able to stream video from my Amahi HDA (cache) server. There are several DLNA apps available for the HDA, but I needed something that transcoded on-the-fly in order to get what I needed.

I had been playing with the PS3 Media Server (cache). It is a Java-based open source tool that was made to transcode. The project has matured significantly in the past year as the community has stepped in to push the project along after the founder got busy with life. I'd been using PMS on my Slackware server for the better part of '10, but that server was on a wireless connection and results were flaky.
With my Amahi server hardwired in to the network, it became a natural place for the PMS.

I installed PMS by un-tarring the files into a directory (/var/pms on my system). I hacked the service script from another Java-based service so I could have it start at boot. That script can be downloaded here. It needs some work as it currently gives an error during shutdown (but the app stops correctly). Save it as ps3_media_server in the /etc/init.d directory on your HDA, then run these two commands:

chkconfig ps3_media_server on

service ps3_media_server start
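For reference, the hacked script amounts to something like this minimal sketch (the /var/pms path matches my install; the PMS.sh launcher name and the pid-file handling here are illustrative assumptions, not the exact script):

```shell
#!/bin/sh
# ps3_media_server - minimal init-style wrapper for PS3 Media Server
# Sketch only: assumes PMS was un-tarred into /var/pms and ships a PMS.sh launcher
PMS_HOME=/var/pms
PIDFILE=/var/run/ps3_media_server.pid

case "$1" in
  start)
    cd "$PMS_HOME" || exit 1
    nohup ./PMS.sh >/dev/null 2>&1 &
    echo $! > "$PIDFILE"
    ;;
  stop)
    # shutdown is the step my hacked script gets wrong; killing by pid file works
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac
```

Save it as /etc/init.d/ps3_media_server and enable it with chkconfig as above.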

My next step will be to get this wrapped into a package for Amahi. Let me know if you can make use of this.

\\Greg

Active Directory is the killer app for Powershell

Friday 25 of February, 2011
We did something not so smart in AD a few months back. To fix it, we needed to reset a bunch of passwords and clear the passwordneverexpires flag on some 250 accounts. PoSh to the rescue!
Note: this is a Win2k3 domain that we don't own, so I have to use the Quest AD cmdlets.

To find the accounts, I did this:

get-qaduser /path/to/OU -passwordneverexpires | select Name,DN,samaccountname,passwordneverexpires | export-csv "C:\temp\file.csv"

Next we took the exported csv file, added a column named password and generated a bunch of strong passwords. What was left was a one-liner to make the changes (Note: I removed the object type definition from the first line of the csv):

import-csv "c:\temp\file.csv" | foreach { get-qaduser $_.samaccountname | set-qaduser -userpassword $_.password -passwordneverexpires $false }

The ease with which import-csv allows you to read in and address the fields of a csv/spreadsheet is incredible. The way the Quest cmdlets and the MS AD cmdlets allow you to act on multiple accounts at once is powerful.

Amahi Drive Replacement

Sunday 13 of February, 2011
My Amahi HDA (cache) server was reporting two SMART drive warnings. The boot drive and one of the storage pool drives had more bad sectors than SMART preferred, and I'd been looking for replacement drives for several weeks. I found a pair of 1TB Seagate drives at NewEgg for $55 ea. Their free shipping cinched the deal.

My plan was this:

- Replace the bad (SATA) disk in the pool
- Move the data to one of the new disks
- Remove the second (SATA) drive from the pool
- Use this good SATA drive to replace the failing ATA boot drive
- Add the second new SATA drive to the pool

The storage pool in Amahi is handled by greyhole (cache), a truly ingenious technology similar in idea to Microsoft's now-defunct Drive Extender. But using greyhole makes swapping out a pool drive a non-trivial operation.

First step in changing a drive is to get the data off it. The tricky part was figuring out which physical drive had the error and then which drive that represented in the pool. Fortunately, I had two different drive manufacturers in the pool. Looking at the /dev/disk/by-id info allowed me to determine it was the WD drive in my pool. From there I determined the WD drive was mounted on /var/hda/files/drives/drive4.

The command for telling greyhole to move the data off this drive is:

greyhole --going=/var/hda/files/drives/drive4/gh

Greyhole will tell you to wait while it goes off to move the data. I had 150GB of files on the disk. It probably ran an hour or so (I didn't time it). Once finished, I could verify with the greyhole -s command that there was no data on the disk.

Next step is to remove the disk from fstab. I always choose to comment out the line by adding a leading #. That way I could put it back without issue. Of course, I failed to make the fstab change prior to shutting off and removing the disk. Fedora would not boot afterwards, telling me there was a disk error and dropping me to a shell to fix it. Oddly, I could not edit fstab from that shell, so I put the disk back in, booted, edited fstab and shut down again. At this point, all the storage pool data was on a single drive.

Next step was to get the replacement drive in place. I followed the instructions for adding a second drive to my system (cache). It comes down to partitioning the disk, adding a file system (I use ext4) and mounting it in the right place. The hda-diskmount command will find the drive once formatted and suggest a location and fstab entry. This is handy and it worked as advertised.

One thing of note: Amahi mounts drives under /var/hda/files/drives/driveX. It doesn't seem to detect whether old mount points are still in use, so it always creates a new mount point. This is safe, but messy. When I added the new 1TB disk, it was assigned to 'drive5' even though there were 3 unused directories already.

I added the fstab entry and rebooted. I then added the new disk to the storage pool and everything was good. I got to thinking that all the data was still on the first disk and wondered if there was a way to balance the data across drives. A review of the greyhole help and a question in the #amahi IRC channel pointed me to the balance option. I ran

greyhole --balance

and monitored the greyhole.log file to see greyhole making a bunch of data moves. It ran for a while.

Once that was complete, I had to tackle replacing the drive that contained the boot partition. For this, Clonezilla was the obvious choice, though I didn't have any experience with the tool. Since the disks were different sizes (the ATA disk was 160GB and the SATA 250GB), I started with cloning only the Fedora boot and root partitions. This left me with an unbootable SATA disk. I then tried a full disk-to-disk clone and this worked. I wound up with a 200MB boot partition, a 160GB root and 83GB of unallocated space. The disk worked as expected.

I wasn't satisfied with the unallocated space, so I turned to the gParted Live CD to resize the root partition to add the additional space. Again, this was flawless.

At this point I had only one remaining task: adding the second 1TB disk into the system and to the storage pool. This went off without a hitch. I did one thing as a test. I removed the now-unused /var/hda/files/drives/drive4 directory, and when I ran hda-diskmount, sure enough it allocated that unused directory for the drive mount. I again ran greyhole to balance the data and when it completed, had ~80GB on each disk.
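For my own future reference, the whole swap boils down to this command sequence (a sketch; the driveN number, device names and log path are from my system and will differ on yours):

```shell
# Map the failing physical drive to its pool mount point
ls -l /dev/disk/by-id/

# Evacuate the drive, then confirm greyhole reports no data left on it
greyhole --going=/var/hda/files/drives/drive4/gh
greyhole -s

# Comment out the drive's /etc/fstab line BEFORE shutting down to pull it,
# or Fedora drops to a repair shell at the next boot

# After the new disk is partitioned and formatted (ext4), let Amahi mount it
hda-diskmount

# With the new disk in the pool, spread the data back across the drives
greyhole --balance
tail /var/log/greyhole.log
```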

Amahi, Fedora & RAID

Sunday 28 of November, 2010

I've recently taken the plunge and converted most of the server duties for my network to an Amahi home digital assistant (cache). The Amahi product is superb and has matured quickly over the past 12-18 months. I had some interesting issues getting started; maybe this will help you.

Amahi is currently built on top of Fedora 12. When I built my server, I had two drives in it, 1 ATA & 1 SATA. During the initial install, Fedora built an LVM with the two drives. Knowing I wanted to make use of the drive pooling feature (greyhole, don't ask), I removed the disk from the LVM and got started.

Fedora put /boot and / onto the ATA drive and the system installed fine. Once I had it running, I added a second SATA disk so now I had two 250GB SATA disks I planned to add to the pool. Problem was, I couldn't get a file system on the second SATA disk. I could run cfdisk to delete and create partitions, but when I tried to use mkfs to format the disk, I got an error saying the disk was busy.
I discovered two issues. The first was that one of the SATA drives was disabled in the BIOS (Dell Optiplex GX620). It's not clear if this caused an issue, because I was able to access the drives from within the OS.

The second issue was that Fedora was adding the disks to a RAID group automatically. I'm just starting to understand this, but I used mdadm to remove the disk from the RAID group. (Use cat /proc/mdstat to see the names of the devices.) I then used mdadm --zero-superblock /dev/... to prevent the disks from being detected as part of a RAID group.
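The cleanup looked roughly like this (a sketch; /dev/md0 and /dev/sdb are placeholders for whatever /proc/mdstat reports, and mkfs.ext4 stands in for whatever file system you use):

```shell
# See which md device Fedora auto-assembled and which disks are members
cat /proc/mdstat

# Stop the array so the member disks are released
mdadm --stop /dev/md0

# Wipe the RAID superblock so the disk is no longer detected as a member
mdadm --zero-superblock /dev/sdb

# Now the format is no longer blocked by a busy device
mkfs.ext4 /dev/sdb1
```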

Powershell and a Hash of Custom Objects

Thursday 16 of September, 2010

Like many companies, we regularly import data from our HR system to populate our Active Directory information. We wrote that script several years back using vbscript to read an Excel spreadsheet and populate AD. As we've been learning Powershell, it is time to re-write this task as an exercise in learning and training.

One of the first areas we needed to deal with was populating the address information for our various offices. The HR export contains only minimal location information (Alexandria, Philadelphia, etc) and we want the full street address in AD. To do this in PoSh, I created an associative array (or hash) of location objects that are custom PoSh objects. This article describes how we made this work. I'm sharing this so we can all learn something about PS hashes.

The detailed address info is kept in a static .CSV file with columns for each field (St, zip, city, etc). That is simply read with import-csv like this:
{CODE (colors=powershell) }
import-csv $LocFile
{CODE}

Next we need to run through each location and build a custom object containing the address details for each. Building on the import, that code looks like this:

{CODE (colors=powershell) }

import-csv $LocFile| foreach {

$loc= new-object object

$loc |add-member noteproperty site $_.site

$loc |add-member noteproperty street $_.street

$loc |add-member noteproperty city $_.city

$loc |add-member noteproperty state $_.state

$loc |add-member noteproperty zip $_.zip

$sites[$_.site] = $loc

}
{CODE}


The first line reads the csv and pipes each row through the foreach command. Each row is a new location, so we create a new object to hold the address - the
{CODE (colors=powershell) }
$loc = new-object object
{CODE} does this. We then add each element of the address to the object using add-member noteproperty.

The last piece of this is associating the location object we create with the site name so we can access the other details directly. This is accomplished by the command:
{CODE (colors=powershell)}
$sites[$_.site] = $loc
{CODE}

The $sites variable has to be initialized using
{CODE (colors=powershell) }
$sites = @{}
{CODE} to tell PoSh that this is a hash table (associative array). Then each time the loop executes,
{CODE (colors=powershell)}
$sites[$_.site] = $loc
{CODE}
the object is added to the hash of these custom objects.

When we process an employee's account during the update, we read the employee's location from the csv file where we store the city; we then use the city to look up the detailed address information from the hash.
{CODE (colors=powershell)}
$EmpAD.l = $Locations[$city].city
$EmpAD.st = $Locations[$city].state
$EmpAD.streetAddress = $Locations[$city].street
$EmpAD.PostalCode = $Locations[$city].zip
{CODE}

Powershell and the lastLogonTimestamp

Tuesday 29 of June, 2010

I wrote a query that will find all AD accounts created more than 30 days ago that have Never
logged in or haven't logged in in over 60 days. I used Powershell v2 and the Quest AD module.
Here's the query I started with (reformatted for the reader, but this is a one-liner)

Get-QADuser -searchroot "corp.net/user accounts/users/OurOU" | where 
 { 
   ($_.whencreated -lt ((get-date).adddays(-30)) ) 
   -and 
         ( 
            ( $_.lastLogonTimestamp -like "Never") 
               -or 
            ($_.lastLogonTimestamp -lt ((get-date).adddays(-60)))
         ) 
 }

When it runs, accounts that have never logged in are listed correctly; accounts that have
logged in generate an error:
"Bad argument to operator "-lt" : Cannot compare "Monday, June 16 2010" because it is not iComparible"
(The error is pointing to the last line of code above)

Since Monday, June 16 2010 looks like a date, I expected it to fail in the comparison to Never, but it fails in comparison to another date.

It turns out the Quest AD snap-in (which is a great tool, BTW) interprets the value of lastLogonTimestamp
to make it display nicely (really, who can understand 175234539836?). What I need is to run
my compare on the raw data, which is accessed by appending .value to the attribute. So the working code looks like this:

Get-QADuser -searchroot "corp.net/user accounts/users/OurOU" | where 
 { 
   ($_.whencreated -lt ((get-date).adddays(-30)) ) 
   -and 
         ( 
            ( $_.lastLogonTimestamp.value -like "Never") 
               -or 
            ($_.lastLogonTimestamp.value -lt ((get-date).adddays(-60)))
         ) 
 }

Now PowerShell can access the real value and transpose between variable types to get me the right answer.

\\Greg

Slackware 13.1 upgrade

Saturday 26 of June, 2010

I just ran the 13.1 upgrade for Slackware. For the most part it went well. One thing of note. Mysqld would not start.
The following error was in the log file:


100626 15:25:22 ERROR Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
100626 15:25:23 InnoDB: Started; log sequence number 0 1209759
100626 15:25:23 ERROR /usr/libexec/mysqld: unknown option '--skip-federated'
100626 15:25:23 ERROR Aborting

100626 15:25:23 InnoDB: Starting shutdown...
100626 15:25:28 InnoDB: Shutdown completed; log sequence number 0 1209759
100626 15:25:28 Note /usr/libexec/mysqld: Shutdown complete

I first focused on the errors regarding mysql.plugin, but after reading some forum posts,
I turned my attention to the unknown option '--skip-federated' error.

I found the entry skip-federated in the /etc/my.cnf file and commented it out.

The server started just fine.

I then ran mysql_upgrade to clean up the other error. Looks great now.

Signed Powershell scripts in the enterprise

Tuesday 29 of December, 2009

I want to start using Powershell within our company to manage repeating tasks and general administrative tasks. Powershell was deployed in a very secure configuration by Microsoft. So much so that it will not run a file-based script in default mode. I do not wish to break this secure default. I've learned to sign scripts so that I can safely downgrade security to 'allsigned', which will allow PS to run scripts that are signed with a trusted cert.

Next step is to deploy our code signing cert so that other machines can run signed scripts without manual intervention. The first part of that is to add the cert as a trusted publisher in a group policy. The instructions for doing that are in this Best Practices (cache) document from MS. That worked great for us.

Next, we needed to roll out a group policy change to set the ExecutionPolicy to allsigned. This was accomplished using a group policy admin template (cache) from MS. Once this was installed and imported into the group policy manager, we were able to enable the ExecutionPolicy setting and set it to AllSigned. We then deployed the GPO to the appropriate machine OUs within our domain and Powershell was automatically reconfigured.

Now we're ready to start using signed scripts!

DD-WRT Wireless Bridge

Monday 28 of December, 2009

I had to move my office to a new room in the house that does not have a wired ethernet connection. Since running wires in our 100 yr-old house is painful, I went looking for a wireless solution. One of my chief drivers is 'not' spending money. I have a Linksys WRT54GS that I could make use of. I've been toying with the idea of using a DD-WRT (cache) firmware in this router, but never had a real need - until now.

DD-WRT is a 3rd-party firmware for the WRT-series routers and supports, among other things, Wireless Client Bridging. Following these instructions at the DD-WRT wiki (cache), I was able to bridge the wired connection on the WRT54GS to my wireless FIOS network.

One item of note: I was unable to make the bridge work with WPA2 encryption on my Actiontec router. I had to back off to WPA encryption.

\\Greg

Windows 2008 R2 activation

Thursday 08 of October, 2009

I usually only post issues that I've personally resolved. This one was solved by one of my crack engineers (with help from the internet) and bears repeating.

We are trying to implement Windows Server 2008 R2 and are now making use of the corporate KMS server that resides out in the corporate cloud. We made the appropriate firewall changes but every attempt at activation was met with the error:

0x80070005 Access is denied: the requested action requires elevated privileges

Lots of research led to a single internet post here (cache).

It comes down to this. If you have a Group Policy applied to the server you are trying to activate that forces the Plug & Play service to Automatic startup, you will get this error. Change the group policy to not defined and all is well. Note this has nothing to do with the actual state of the P&P service. Stopping it will not help.

One other note. If you attempt to use MSDN product keys to activate a Win 2008 R2 domain member server under a similar GPO, those keys will NOT work either.

\\Greg

Signing Powershell scripts

Friday 02 of October, 2009

I've recently begun writing powershell scripts. I'm a bit late to this game, but better late than never. As the PS team did a great job of ensuring PS scripts were secure by default, I want to do the right thing and sign all my scripts rather than weaken the security setting.

That's easy to do. We have a CA on our domain and I signed up for a code signing cert from the server. I then wrote a small function to sign my scripts and added it to my profile. It looks like this:

function signIt {
	Set-AuthenticodeSignature $args[0] @(Get-ChildItem cert:\CurrentUser\My -codesigning)[0]
    }

To sign a script you can enter this at a PS prompt:

signit c:\mycode\mypsscript.ps1

All well and good, right?


OK, so that worked fine. Several weeks later, I created a new script, and when I tried to sign it I would get an "unknown error" saying the "data is invalid". It took a fair amount of googling with Bing to find no answer. I turned to the MS newsgroups and found an answer from Robert Robelo. He said this:

"It's the encoding.
By default PowerShell's ISE encodes a new script in BigEndian Unicode.
PowerShell can't sign BigEndian Unicode encoded scripts. (Oops!)
So, for any new script you create -or any created before- through the ISE that you want to sign, open it and set the encoding you prefer.
Besides BigEndian Unicode, the other valid values are:
ASCII
Default
Unicode
UTF32
UTF7
UTF8"

Sure enough, looking at the file encoding using Notepad++, I could see the file was encoded as UCS-2 Big Endian. I used Notepad++ to convert the file to UTF-8 and I was able to successfully sign the script. Hats off to Robert for the tip. I'm documenting it here so others may find it easier.

\\Greg

Stopping Akonadi in KDE 4.2

Saturday 05 of September, 2009

I loaded Slackware 13 on a laptop I use mostly for remote administration and other casual uses. At every startup, Akonadi also starts up; the progress bar jumps on top, stays there for 30 seconds and generally makes a nuisance of itself. Since I'm not likely to use this laptop as a PIM, I poked around to get it to stop.

- First thing: by default, KOrganizer runs in the tray for reminder notification. I right-clicked and disabled notifications.

- Next I looked at KResources in the KDE Settings applet and made sure none of them were associated with an Akonadi store.

- Finally, I stopped the kres-migrator from running (and trying to convert my non-existent data into Akonadi) using this command:

kwriteconfig --file kres-migratorrc --group Migration --key Enabled --type bool false

Printing to the Lexmark e260dn from Linux

Saturday 29 of August, 2009

Found a great tip on how to print to this printer here (cache). Here's my attempt.

I have the e260dn attached to a WinXP desktop that has IPP printing enabled.
- Download the PPD file for the e352dn from here (cache) and save it to a file

- From within the Cups Printer mgmt page select Add Printer
- Specify the name and other details, click Continue
- For Device, select AppSocket/HP JetDirect, click Continue
- In the URI field enter http://printserver/printers/printersharename/.printer replacing printserver with the WinXP computer name and printersharename with the Windows printer share name.
- Upload the PPD file saved in step one to the CUPS server in the 'Provide a PPD file' box, click Add Printer
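The same printer can also be defined from the command line instead of the CUPS web page (a sketch; the queue name and PPD path are placeholders for your own values):

```shell
# Create the queue pointing at the Windows IPP share, using the saved PPD,
# then enable it (-E after -p enables the printer and accepts jobs)
lpadmin -p e260dn \
        -v http://printserver/printers/printersharename/.printer \
        -P /path/to/e352dn.ppd \
        -E
```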



Slackware 13 & Broadcom wireless

Monday 10 of August, 2009

What a pain this was to figure out. Here are my notes

- Read this link (cache)

- You need to install and build two packages from SlackBuilds.org
-- b43-fwcutter
-- b43-firmware

-- Run b43-fwcutter-012/b43-fwcutter -w /lib/firmware wl_apsta_mimo.o

-- Install wicd from /extra

-- Reboot

Cisco VPN Client problem

Monday 11 of May, 2009

My company uses the Cisco VPN client 5.x and Aladdin eTokens for two-factor authentication.
I was getting this error trying to use my token for vpn:
Error 32: unable to verify certificate.

Turning up logging in the VPN client dug up some more detail that said “Cert chain missing”

So I opened the certificate manager in IE (tools, Internet options, Content, Certificates). It listed my certificate in there under Personal.
I viewed the cert and it was listed as invalid.
Under Certification Path it showed the cert chain was failing for the Root CA.
There was, conveniently, an Import button. I pressed it and voila, the Root CA cert was imported.

I was then able to successfully login using the token.

(Note: If it matters, we have an Enterprise Root CA and an Intermediate CA in our network. All certs are issued from the intermediate.)

Clean up old computer accounts

Wednesday 06 of May, 2009

We needed a way to delete aging computer accounts from AD. This script uses the DS* tools from MS (included in Win2k3, Win2k8 and Vista).

Notes:

  • You need to specify the root OU and directories for the email tool.
  • You need to specify the inactive timer (currently 12 weeks)
  • You need to set the search limit (currently 100 accounts)
  • To make it take action, you must call it with a parameter of 'Prod' else it will run in test (no delete mode)
  • Any computer account that has the string !!Do Not Delete!! in the description will not be deleted.
  • Any computer account with child objects (e.g. virtual server hosts) will not be deleted.
  • The script uses blat to send email with results. You can rip that out by commenting out the line 'goto :-SendReport'.



Please leave a comment should you make use of this tool.


{CODE()}
@echo off
:: FindAgingCompAccts - GjM - 5/1/09
:: Uses MS tools (dsquery, dsget, dsmod) to locate inactive accounts and disable them
:: Computer accounts with !!Do Not Delete!! in the Description will not be disabled.
::
:: set blatbin, dsbin, SCRIPT_DIR, & Mode before running
::

::blatexe is the full path to blat.exe
setlocal
Set blatexe=c:\netadmin\bin\blat.exe

:: dsbin is location of dsquery & other tools (leave blank if in path)
set dsbin=

::SCRIPT_DIR is location of this script - created dynamically based on calling location
set SCRIPT_DRV=%~d0
set SCRIPT_DIR=%~p0
echo scriptdir: %SCRIPT_DIR%
set LogDir=%SCRIPT_DIR%logs
set TempDir=%SCRIPT_DIR%temp
set DataDir=%SCRIPT_DIR%data
set OldAcct=%TempDir%\oldacct.txt
set logfile=%LogDir%\Oldcomp.log
set actlog=%LogDir%\action.log
set inactlog=%LogDir%\inaction.log
set errlog=%LogDir%\error.log
set resultfile=%TempDir%\results.log
set tempout=%TempDir%\temp.log

set RootOU="DC=corp,DC=com"

:: Call batch file with PROD as a parameter in order to disable accounts
set MODE=%1
if NOT DEFINED MODE (
set MODE=Test
echo The script must be called with a parameter of 'Prod' in order to change accounts ^(ex: 'FindAgingCompAccts Prod'^)
)
echo Mode is: %MODE%
set SKIP_FLAG=!!Do Not Delete!!
set INACTIVE_PERIOD=12
set ISFlagged=0

::for search_limit use 0 to find all inactive accounts
set Search_Limit=100

cd %LOGDIR%
::Cleanup previous session
copy action_history.log+action.log action.tmp
del action_history.log
ren action.tmp action_history.log

copy error_history.log+error.log error.tmp
del error_history.log
ren error.tmp error_history.log

del %logfile%
del %actlog%
del %inactlog%
del %errlog%

set ActCount=0
set SkipCount=0
set PrevCount=0
set ErrCount=0
set count=0

::cd %WORK_DIR%

::query AD for inactive accounts
echo %Date% %Time% Starting automatic account maintenance to clean inactive computer accounts
echo %Date% %Time% Starting automatic account maintenance to clean inactive computer accounts >>%logfile%
echo Querying inactive accounts
echo %Date% %Time% >%OldAcct%
%dsbin%dsquery computer %RootOU% -inactive %INACTIVE_PERIOD% -limit %Search_Limit% 1>>%OldAcct% 2>dsquery.err
if %ERRORLEVEL% NEQ 0 goto :ERR

::Count inactive accounts
for /f "delims=?" %%a in (%OldAcct%) do set /a count+=1 >nul
echo Inactive accounts to process: %count%

:ProcessInactiveAccounts
::This is the main script loop
::Loop through the list of inactive accounts and check their status
for /f "delims=?" %%a in (%OldAcct%) do call :ChkUserStatus %%a
goto :-SendReport
cd %SCRIPT_DIR%
goto :EOF

:ChkUserStatus
:: Check description for flag that tells us not to disable
:: Disable account if not flagged
::echo on
set CN=%1
echo %CN%
if %CN%=="" goto :EOF

for /f "delims=: tokens=2" %%b in ('%dsbin%dsget computer "%CN%" -desc -q -L') do (

:: %%b contains the description from AD. This line uses findstr to look for the FLAG in the description
echo "%%b" |findstr /i /c:"%SKIP_FLAG%" >nul
:: findstr returns errorlevel 1 if no match is found
if ERRORLEVEL 1 (
call :-DeleteAcct %CN%
) ELSE (
call :-SkipAcct %CN%
)
)
goto :EOF

:-DeleteAcct
::Delete the account
if %MODE%==Prod (
echo Trying to delete computer account: %CN% >>%actlog%
echo Trying to delete computer account: %CN%
set /a ActCount+=1

for /f "tokens=2 delims=: " %%c in ('dsrm "%CN%" -noprompt -subtree 2^>^&1 ^|findstr "failed"') do (

if /i %%c EQU failed (
echo Error deleting %CN%
echo Error deleting %CN% >>%errlog%
set /a ErrCount+=1
set /a ActCount-=1
) else (
echo Computer account deleted: %CN% >>%actlog%
echo Computer account deleted: %CN%
set /a ActCount+=1
)
)

) else (
echo Mode is %MODE% - not deleting, %CN% >>%inactlog%
echo Mode is %MODE% - not deleting, %CN%
set /a TestCount+=1

)
goto :EOF

:-SkipAcct
::Log accounts not being disabled
echo Account flagged, skipping computer, %1 >>%inactlog%
echo Account flagged, skipping computer, %1
set /a SkipCount+=1
goto :EOF

:-SendReport
echo Mode is: %MODE%
echo DeletedAccounts: %ActCount%
echo FlaggedAccounts: %SkipCount%
echo ErrorAccounts: %ErrCount%
echo Test Accounts: %TestCount%

echo Mode is: %MODE% >>%resultfile%
echo Deleted Accounts: %ActCount% >>%resultfile%
echo Flagged Accounts: %SkipCount% >>%resultfile%
echo Error Accounts: %ErrCount% >>%resultfile%
echo Test Count: %TestCount% >>%resultfile%

echo See inaction.log at \\exchmonitor\c$%SCRIPT_DIR% >>%resultfile%
type %SCRIPT_DIR%\usagenote.txt >>%resultfile%
if %Mode%==Prod %blatexe% %resultfile% -tf %SCRIPT_DIR%\recips.txt -subject "Computer account maintenance" -attacht %WORK_DIR%\results.txt -attacht %actlog% -attacht %inactlog% -server smtpint.corp.com -f AccountManagers@corp.com
goto :EOF

:IsDisabled
:: Checks user disable flag and sets ISDIS to 1 if disabled
for /f "delims=: tokens=2" %%c in ('dsget user -disabled -q -L %1') do (
for %%e in (%%c) do (
if %%e==yes (
set ISDIS=1
) else (
set ISDIS=0
)
)
)
goto :EOF

:ERR
echo %Date% %Time% >>%errlog%
echo Error retrieving inactive computer accounts >>%errlog%
echo Error retrieving inactive computer accounts
type %SCRIPT_DIR%\usagenote.txt >>%errlog%
%blatexe% %errlog% -tf %SCRIPT_DIR%\recips.txt -subject "Error with computer account maintenance" -attacht %errlog% -attacht dsquery.err -server smtpint.corp.com -f AccountManagers@corp.com
goto :EOF



{/CODE}

A better Nagios interface

Tuesday 03 of March, 2009

I'm always hacking at something. Today I turned my Nagios 3.0.3 interface into something more exciting. I converted it to the Nuvola style (cache), which I think looks great, especially compared to the plain and aged stock Nagios interface.

While playing with the interface, I had the idea of adding a countdown timer until page refresh so I went looking for a way to do it.

First I found a JavaScript timer at hashemian.com (cache) (thanks, Robert). I copied the countdown.js file to the root of my web server to save him bandwidth.

Nagios will include custom header or footer files if you create them in the right place and name them correctly. I created a file called status-footer.ssi in the nagios/ssi directory. The contents of the file are as follows:

<script language="JavaScript">
TargetDate = new Date();
TargetDate.setSeconds(TargetDate.getSeconds() + 90);

BackColor = "ltgray";
ForeColor = "dkgray";
CountActive = true;
CountStepper = -1;
LeadingZero = true;
DisplayFormat = "Status refresh in: %%M%% minute %%S%% seconds.";
FinishMessage = " Refresh!";
</script>
<div class='countdown'>
<script language="JavaScript" src="/countdown.js"></script>
</div>
</BODY>
</HTML>

This code is a modified version of what Robert posted to his blog. The first two lines set the countdown time to now plus 90 seconds which is the default refresh time for the pages.

I then

Searching with Index Service

Tuesday 10 of February, 2009

Sorry I'm so late to this party, but...

We have an intranet site that uses Frontpage's search webbot. It stopped working, and because I didn't want to learn how it worked, I decided to implement the search using the Windows Index Service and .asp pages. Here's how I did it.

The problem:

  • Documents are separated into two high-level groups, in-house and home workers. Separate catalogs exist for these document trees
  • Searches must be restricted to documents in the tree
  • Documents consist of a collection of .htm files contained within a particular subdirectory within the tree
  • A search from a given document should only return hits from that document


Solution:

  • Create a search form to be added to the index page of each document
  • Have the form post to a results.asp page within the same directory as the document
  • Use the server variable SCRIPT_NAME to capture the directory of the current document
  • Use the directory to limit the scope of the search
  • Use an include statement in results.asp to call the real search script



The search form

Note: This is an html file stripped of its html. Sorry for the confusion, but this is a well-known problem and you can find the info in other places.

form action="results.asp" method=post
Search text:
input type=text name="searchstring" size="50" maxlength="100" value=" "
Ex: 'meter percent' will return documents with either meter or percent
Ex: 'meter & percent' will return documents with both meter and percent
button type=submit Submit
button type=reset Clear Form
/form

Results.asp
(Properly formatted html file with this in the body)
#include virtual="/_Search/search.asp"

The Search.asp file
Again, any html is stripped. In particular, you have to provide the table definition for the response.write statements. Contact me if you need details.


' search.asp - Greg Martin - feb 2009
' uses index service to provide search function for Reedman
' 
' search.asp is meant to be 'included' in results.asp.  In fact, it 
' relies on the fact that it is called through results.asp, located in the directory of the document that needs to be searched, 
' so that the current directory can be read from the "SCRIPT_NAME" server variable
'
' This section sets the various configuration variables

Dim formscope, pagesize, maxrecords, searchstring
dim catalogtosearch, searchrankorder, origsearch
dim q, filepath, CRLF, rs
Dim lsPath, arPath, CurPath


formscope="/"
pagesize = 500
maxrecords=500
searchstring=request.form("searchstring")

' we have two catalogs - one for homeworkers and one for Prod
' here we select the catalog based on the location of the results page
if instr(Request.ServerVariables("SCRIPT_NAME"),"homeworker") > 0 then
	catalogtosearch="eHW_Search"
	response.write "Searching eHomeworker catalog"
else
	catalogtosearch="Prod_Search"
	response.write "Searching Production catalog"
end if

searchrankorder="rank[d]"
origsearch=searchstring
CRLF = Chr(13) & Chr(10)

if trim(searchstring) = "" then

   response.write "No search terms entered"

else
	
	' this code effectively removes the filename and leaves the path
	' it also replaces / with \ so the regex search works
	lsPath = Request.ServerVariables("SCRIPT_NAME")
	arPath = Split(lsPath, "/")
	arPath(UBound(arPath,1)) = ""
	CurPath = Join(arPath, "\")
	
	'This section performs the query
	
	set q=server.createobject("ixsso.query")
	
	' this query restricts results to docs in the filepath using regex & #path
	filepath= "*" & CurPath & "*"
	q.query=searchstring & " & #path " & filepath
	
	q.catalog=catalogtosearch
	q.sortby=searchrankorder
	q.columns="doctitle, filename, size, write, rank, directory, vpath, path"
	q.maxrecords=maxrecords
	
	
	'Displays the results
	set rs=q.createrecordset("nonsequential")
	rs.pagesize=pagesize
	response.write"<p>Your search for <b>" & origsearch & "</b> produced "
	
	if rs.recordcount=0 then response.write "no results"
	if rs.recordcount=1 then response.write "1 result: "
	if rs.recordcount>1 then response.write(rs.recordcount) & " results: </p>"
	
	response.write "I cannot post this code"
	
	
	do while not rs.EOF
	   response.write 	"I cannot post this code"
	   rs.movenext
	loop
	
	response.write ""
	
end if

set rs=nothing
set q=nothing
set util=nothing
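For readers more at home in a shell, the "drop the filename, keep the directory" step that the ASP code does with Split/Join on SCRIPT_NAME can be sketched like this (the path here is hypothetical, purely for illustration; note the ASP version also keeps leading and trailing backslashes, which the #path wildcards absorb):

```shell
# Hypothetical path; the ASP code reads the real one from SCRIPT_NAME
script_name="/homeworker/docs/manual/results.asp"
# Drop the filename, keeping the directory
cur_path=$(dirname "$script_name")
# Swap / for \ so the path matches the catalog's backslash form,
# as the ASP code does before building its #path query
cur_path=$(printf '%s\n' "$cur_path" | tr '/' '\\')
echo "$cur_path"   # prints \homeworker\docs\manual
```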



Substrings in Windows batch scripts

Thursday 22 of January, 2009

I needed a way to convert the contents of the %DATE% variable (which looks like: Thu 01/22/2009) into mmddyy format. The windows SET command can do this.

 Set Shortdate=%DATE:~4,2%%DATE:~7,2%%DATE:~12,2% 

So, let's dissect it by looking at a smaller portion first. (Remember that Windows command shell variables are referenced as %VARNAME%. When you use the Set command, the variable name can have modifiers.)

Set Shortdate=%DATE:~4,2% 


%DATE% is the variable we are searching.
%DATE:~4 says to start at an offset of four from the beginning of the string
,2% says to return two characters.
So

 Set Shortdate=%DATE:~4,2%
 echo %Shortdate% 


Would print: 01 (assuming it is January in the US)
So we could string this mechanism together like this

 Set Shortdate=%DATE:~4,2%
 Set shortdate=%shortdate%%DATE:~7,2%
 Set shortdate=%shortdate%%DATE:~12,2% 

Or really shorten the process, as shown in the first example.
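For comparison, bash has the same offset/length idea in its parameter expansion. A quick sketch, with the date string hard-coded to the same "Thu 01/22/2009" layout rather than taken from the date command:

```shell
# ${VAR:offset:length} is the bash analog of %VAR:~offset,length%
DATE="Thu 01/22/2009"
Shortdate="${DATE:4:2}${DATE:7:2}${DATE:12:2}"
echo "$Shortdate"   # prints 012209
```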



MythTV Time

Friday 26 of December, 2008

I'm building my own PVR. Lots of reasons why, but the biggest is the hassle of trying to continue using the VCR. So into the new century I come, again.

I have a Dell GX280 that I won at a work give-away. 2GB RAM, 2.4GHz Pentium-4.
Video out - Diamond HD 2600xt Sb Edition (ATI Radeon)
Video in - A loaner that I'm not sure of.

I installed Mythbuntu 8.10 without incident.
I found this in the dmesg log

 Linux video capture interface: v2.00
[   10.024756] ivtv:  Start initialization, version 1.4.0
[   10.024846] ivtv0: Initializing card #0
[   10.024855] ivtv0: Unknown card: vendor/device: 4444/0016
[   10.024861] ivtv0:               subsystem vendor/device: 1002/ffff
[   10.024867] ivtv0:               cx23416 based
[   10.024871] ivtv0: Defaulting to Hauppauge WinTV PVR-150 card
[   10.024875] ivtv0: Please mail the vendor/device and subsystem vendor/device IDs and what kind of
[   10.024880] ivtv0: card you have to the ivtv-devel mailinglist (www.ivtvdriver.org)
[   10.024885] ivtv0: Prefix your subject line with [UNKNOWN IVTV CARD].
[   10.024977] ivtv 0000:04:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[   10.026646] tveeprom 0-0050: Huh, no eeprom present (err=-6)?
[   10.026652] tveeprom 0-0050: Encountered bad packet header [00]. Corrupt or not a Hauppauge eeprom.
[   10.026658] ivtv0: Invalid EEPROM
[   10.462996] cx25840 0-0044: cx25  0-21 found @ 0x88 (ivtv i2c driver #0)
[   10.469463] wm8775 0-001b: chip found @ 0x36 (ivtv i2c driver #0)
[   10.472660] wm8775 0-001b: I2C: cannot write 000 to register R23
[   10.475913] wm8775 0-001b: I2C: cannot write 000 to register R7
[   10.479109] wm8775 0-001b: I2C: cannot write 021 to register R11
[   10.482291] wm8775 0-001b: I2C: cannot write 102 to register R12
[   10.485503] wm8775 0-001b: I2C: cannot write 000 to register R13
[   10.488659] wm8775 0-001b: I2C: cannot write 1d4 to register R14
[   10.491875] wm8775 0-001b: I2C: cannot write 1d4 to register R15
[   10.495044] wm8775 0-001b: I2C: cannot write 1bf to register R16
[   10.498220] wm8775 0-001b: I2C: cannot write 185 to register R17
[   10.504745] wm8775 0-001b: I2C: cannot write 0a2 to register R18
[   10.522574] wm8775 0-001b: I2C: cannot write 005 to register R19
[   10.534303] wm8775 0-001b: I2C: cannot write 07a to register R20
[   10.547012] wm8775 0-001b: I2C: cannot write 102 to register R21
[   10.547834] ivtv0: Registered device video0 for encoder MPG (4096 kB)
[   10.547940] ivtv0: Registered device video32 for encoder YUV (2048 kB)
[   10.548046] ivtv0: Registered device vbi0 for encoder VBI (1024 kB)
[   10.548153] ivtv0: Registered device video24 for encoder PCM (320 kB)
[   10.548259] ivtv0: Registered device radio0 for encoder radio
[   10.548266] ivtv0: Initialized card #0: Hauppauge WinTV PVR-150
[   10.548367] ivtv:  End initialization

Disabling Inactive Active Directory accounts

Wednesday 24 of December, 2008

We needed a method to disable inactive accounts in Active Directory. The DSQuery tool has an -inactive switch, and DSMod can disable accounts. The problem is they are all-or-nothing affairs. We needed a way to exclude some accounts (system accounts and other special cases).

We accomplished this by adding some flag text to the description of the special accounts. The following script will search for accounts that haven't been used in 4 weeks and if they don't have the flag in the description, will disable them.

@echo off
:: FindAgingAccts - GjM - 12/22/08
:: Use as you'd like, please attribute - thanks
:: Uses MS tools to locate inactive accounts and disable them
:: Accounts with !!Do Not Disable!! in the Description will not be disabled.
:: 
Set blatbin=c:\acc
set dsbin=c:\acc\ad
set SCRIPT_DIR=\data\dev\aging_accts

:: Set MODE=Prod in order to disable accounts
set MODE=Test
set WORK_DRV=C:
set WORK_DIR=%SCRIPT_DIR%\temp
set SKIP_FLAG=!!Do Not Disable!!
set INACTIVE_PERIOD=4

%WORK_DRV%
cd %WORK_DIR%
::Cleanup previous session
del results.txt
del action.log
del inaction.log
set ActCount=
set SkipCount=
set count=
copy inactive.old+inactive.txt inactive.tmp
del inactive.old
ren inactive.tmp inactive.old

::Locate old accounts
echo Starting automatic account maintenance
echo Querying inactive accounts
echo %Date% %Time% >inactive.txt
%dsbin%\dsquery user -inactive %INACTIVE_PERIOD% -limit 0  1>>inactive.txt 2>dsquery.err
if %errorlevel% NEQ 0 goto :ERR

::Count results
for /f "delims=? skip=1" %%a in (inactive.txt) do set /a count+=1 >nul
echo Inactive accounts to check: %count%

::Loop through the list of aging accounts and check their description
for /f "delims=? skip=1" %%a in (inactive.txt) do call :ChkUserStatus %%a
goto :SendReport
goto :EOF

:ChkUserStatus
:: Check description for flag that tells us not to disable
:: take action based on results
if [%1]==[] goto :EOF
for /f "delims=: tokens=2" %%b in ('%dsbin%\dsget user -desc -q -L %1') do (
	rem %%b contains the description from AD.  This line uses findstr to look for the FLAG in the description
	echo %%b |findstr /i /c:"%SKIP_FLAG%" >nul
	rem findstr returns errorlevel 1 if no match is found
	if ERRORLEVEL 1 (
		call :DisableAcct %1
	) ELSE ( 
		call :SkipAcct %1
	)
)
goto :EOF

:DisableAcct
::Disable the account
echo %Date% %Time%, Disabling User, %1 >>action.log
set /a ActCount+=1
if %MODE%==Prod %dsbin%\dsmod user -disabled yes %1
goto :EOF

:SkipAcct
::Log accounts not being disabled
echo %Date% %Time%, Account flagged, skipping User %1 >>inaction.log
set /a SkipCount+=1
goto :EOF

:SendReport
echo Mode is: %MODE%
echo DisabledAccounts: %ActCount%
echo SkippedAccounts: %SkipCount%
echo DisabledAccounts: %ActCount% >>results.txt
echo SkippedAccounts: %SkipCount% >>results.txt
echo Mode is: %MODE% >>results.txt
echo See inaction.log at \\exchmonitor\c$%WORK_DIR% >>results.txt
::Note: The following must all be on a single line
%blatbin%\blat results.txt -tf recips.txt -subject "Automatic account maintenance" -attacht %WORK_DIR%\results.txt -attacht %WORK_DIR%\action.log -server exch05.my.com -f admin@my.com
goto :EOF

:ERR
echo Error retrieving inactive users
echo Error retrieving inactive users>>results.txt
goto :EOF


Notes:

  • The ds* tools from Win2k3 server must be available in the path or as defined in dsbin
  • This script uses blat to send SMTP mail. If you aren't aware of it, search the web
  • You must set the MODE variable to Prod for the script to make changes to AD.
  • If you wish to use a different flag, modify the SKIP_FLAG variable

Editing custom AD attributes

Tuesday 23 of December, 2008

I have the need to edit the employeeID and employeeNumber attributes in AD. These attributes are not exposed in ADUC. Here's some reference material and a short script for adding the capability.

  • There is a straightforward way of adding items to the right-click menu in AD using the Display Specifiers in the AD Configuration container
  • You can use a simple VB script to edit simple attributes
  • You could edit more complex attributes by writing a more complex program (Say with VB), but we won't cover that here.


  • Create a script to edit the attribute
    Here is a simple script (eeID.vbs) to edit the employeeID.
    (Note: To test the script, call it from the command line with a full LDAP path to a user object (ex: 'cscript eeid.vbs LDAP://cn=gmartin,cn=users,dc=somedomain,dc=com'))


Create and save this script somewhere in your path.

' EEID.vbs - GjM - 12/22/08
' Displays and allows edits to the employeeID attribute in AD
Option Explicit
Dim Args, oUsr, sNewID
Set Args = Wscript.Arguments
Set oUsr = GetObject(Args(0))
sNewID = InputBox("LDAP path: " & Args(0) & vbCRLF & vbCRLF & _
  "The Employee ID of the user is: " & oUsr.employeeID & vbCRLF & _
  "If you would like to enter a new number or modify the existing number, " & _
  "enter the new number in the textbox below")
if sNewID <> "" then 
	oUsr.Put "employeeID",sNewID
end if
oUsr.SetInfo
Set oUsr = Nothing

  • Add the item to the user admin context menu
    • Open ADSIEdit and connect to the Configuration container
    • Browse to CN=DisplaySpecifiers, CN=409 (or your language specifier)
    • Right click on CN=user-Display and select Properties
    • Highlight adminContextMenu and click Edit
    • Enter '6,&Employee ID, eeid.vbs' into the "Value to add" field and click Add
      • (Note: the number 6 represents the canonical order of the item in the context menu. Feel free to play with this value to move your new item into the position you'd like.)
      • (Note: to remove this item from the context menu, open the edit box again, highlight the Employee ID line you added and click 'Remove')
    • Click OK to exit all the way out








Links
TechNet article that discusses this process, but beware if you do not know what the script there does (cache)
Article at softheap.com that discusses this, but missed a step (cache)

Slackware 12.1 & VirtualBox

Wednesday 25 of June, 2008

I am running Slackware 12.1 as a virtual guest on my Vista laptop. I just spent several hours over several days getting the Linux additions to run. So here are some tips (thanks to T3slider over at LinuxQuestions.org for helping me get to the bottom of this).

To install this, you mount the VBox additions ISO and run ./VBoxLinuxAdditions.run

The first error I saw was from the install script saying something like:

Please install the build and header files for your Linux kernel


This was solved by loading the kernel-headers-2.6.24.5_smp-x86-2.tgz package from disk 1 of the Slackware set, in the slackware/d directory. (Try: installpkg kernel-headers-2.6.24.5_smp-x86-2.tgz)

After this the additions still wouldn't compile, but the error was now hidden in a log file, /var/log/vboxadditions.log.
The error made reference to:

ERROR: Kernel configuration is invalid
include/linux/autoconf.h or auto.conf are missing

or

bin/sh scripts/mod/modpost: No such file or directory


It took me a while, but T3 mentioned off-hand that it sounded like the kernel-source was missing. I found that on disk 2 of the Slackware set, in the /slackware/k directory (try: installpkg kernel-source-2.6.24.5_smp.tgz)

Take note that the installer adds several new scripts to /etc/rc.d and adds references to them in rc.local to load the drivers at startup.

I'm still having a problem with pointer integration. It says it is working, but while I see the mouse in the window, it doesn't do anything. I'll post back when I know more.