 

Greg's Tech blog

My technical journal where I record my challenges with Linux, open source SW, Tiki, PowerShell, Brewing beer, AD, LDAP and more...

SWAG and Tiki

Friday 21 of May, 2021

Next in my docker journey was to bring up SWAG - Secure Web Application Gateway.  It's a container from Linuxserver.io that combines an nginx reverse proxy with a Let's Encrypt ACME client to provide a secure front end to self-hosted web apps.  When SWAG is built into a docker-compose alongside the web apps, it provides a secure backend (contained within a docker network) as well as HTTPS for all client connections.

SWAG provides a bunch of predefined app-specific proxy config files.  Of course, there isn't one for Tiki, so I made one by modifying an existing subdomain.conf sample.  The code for it is below.

When I first fired it up, I was directed to the default SWAG landing page.  Some research reminded me that SWAG talks to the app via the internal network/port, not the external host & ports.  I had mistakenly set the upstream port to the external port I had defined for the tiki container.  Changing this to use port 80 against the container name fixed this.
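A quick sanity check for this kind of mistake (a sketch; it assumes the SWAG container is named swag, the Tiki container is named tiki, both share the compose network, and curl is present in the image):

docker exec swag curl -sI http://tiki:80 | head -n 1    # expect an HTTP status line back from the tiki container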

I was also concerned that I would need to configure Tiki with a certificate in order to get a clean SSL experience for the client.  But nginx handles this nicely as the proxy server.  Very nice.

tiki.subdomain.conf:

 

## Version 2020/12/09

# REMOVE THIS LINE BEFORE SUBMITTING: The structure of the file (all of the existing lines) should be kept as close as possible to this template.
# REMOVE THIS LINE BEFORE SUBMITTING: Look through this file for <tags> and replace them. Review other sample files to see how things are done.
# REMOVE THIS LINE BEFORE SUBMITTING: The comment lines at the top of the file (below this line) should explain any prerequisites for using the proxy such as DNS or app settings.

# make sure that your dns has a cname set for <container_name> and that your <container_name> container is not using a base url

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name tiki tiki.gmartin.org; # blog blog.gmartin.org;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app tiki;
        set $upstream_port 80;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}

 

 

 

Prometheus & docker permission denied error

Wednesday 12 of May, 2021

I'm moving  all my self-hosted services to docker - specifically, docker compose.  I'm using this config for prometheus:

prometheus:
    image: prom/prometheus:latest
    # privileged: true
    volumes:
      - /mnt/samba/Docs/docker/config/prometheus/config/prometheus.yml:/etc/prometheus/prometheus.yml
      - /mnt/samba/Docs/docker/config/prometheus/data:/prometheus
      # - /data/prometheus/config/prometheus.yml:/etc/prometheus/prometheus.yml
      # - /data/prometheus/data:/prometheus
      # - ./alertmanger/alert.rules:/alert.rules
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'

I was getting this error on startup:
ERROR: for prometheus Cannot start service prometheus: OCI runtime create failed: container_linux.go:367: starting container process caused: chdir to cwd ("/prometheus") set in config.json failed: permission denied: unknown

After much testing, I added the user: "1000" setting to force the container to run as my account.  I'll admit I have a lot to learn about docker and permissions.
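For reference, one way to check the mismatch from the host. It's a sketch: the paths are the same bind mounts as in the compose file, and the chown assumes the data directory sits on a filesystem that honors ownership (for a network mount the uid/gid mount options matter instead):

id -u                                                    # my account is UID 1000
ls -ldn /mnt/samba/Docs/docker/config/prometheus/data    # numeric owner of the bind-mounted data dir
sudo chown -R 1000:1000 /mnt/samba/Docs/docker/config/prometheus/data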

Here is the final yml:

prometheus:
    image: prom/prometheus:latest
    user: "1000"
    # privileged: true
    volumes:
      - /mnt/samba/Docs/docker/config/prometheus/config/prometheus.yml:/etc/prometheus/prometheus.yml
      - /mnt/samba/Docs/docker/config/prometheus/data:/prometheus
      # - /data/prometheus/config/prometheus.yml:/etc/prometheus/prometheus.yml
      # - /data/prometheus/data:/prometheus
      # - ./alertmanger/alert.rules:/alert.rules
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'

 

 

 

More on moving to docker-compose

Thursday 29 of April, 2021

Here is the list of services and features I want in DC.
Items marked "x" are done.

  •  x volumes for all gpm/samba shares  
  •  x single mysql install
  •  x .env file
  • SWAG /let's encrypt
  •  x Tiki
  •  x greyhole
  • Nextcloud
  • plex
  • subsonic
  • booksonic
  • ghost
  • TinyPin
  • Cockpit
  • portainer

Moving to Docker-compose

Thursday 29 of April, 2021

I'm real late to the docker game, but listening to the Self Hosted podcast recently helped me realize the simplicity of this configuration.  Well, simple once you understand it.  I spent the day today moving my native web services to docker.  Here's how it went.

Things I needed to move and make work

 

  • TikiWiki - been running this CMS for 18 years now. It has to go with me.
  • mariadb - it currently holds the greyhole and tiki databases
  • greyhole connection to mariadb
  • nextcloud - I don't really need this, but I have it running in a standalone container and I want it in DC

 

It took me all day to get a working config just for mariadb.  All the issues were authentication-related once the db spun up in a container.  Here's what I think I know (a sketch of the grant follows the list):

 

  • apps in a container connect using "%" as the host
  • apps outside a container use localhost
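Here's a rough sketch of the kind of grant that made containerized apps happy; the compose service name (mariadb), database (tikiwiki), user, and password are placeholders rather than my real values:

# '%' as the host lets connections come from any container on the docker network
docker-compose exec mariadb mysql -uroot -p -e "
  CREATE USER IF NOT EXISTS 'tiki'@'%' IDENTIFIED BY 'changeme';
  GRANT ALL PRIVILEGES ON tikiwiki.* TO 'tiki'@'%';
  FLUSH PRIVILEGES;"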

 

Greyhole was a special case.  Since mariadb is in a container, localhost no longer works as a db_host.  I changed it to 127.0.0.1 and it worked fine.  One other issue: restarting the docker-compose disconnects the session and greyhole must be restarted.  I may consider a separate mariadb instance for this.
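The change itself is a one-liner against /etc/greyhole.conf. A minimal sketch, assuming the stock db_host key and that greyhole runs as a systemd service:

sudo sed -i 's/^db_host *= *localhost/db_host = 127.0.0.1/' /etc/greyhole.conf   # reach mariadb over the published TCP port
sudo systemctl restart greyhole                                                  # pick up the new db_host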

For the mariadb migration, I was hoping that if I simply mounted the physical mariadb folder into the container, the container version would just use it, but that didn't work.  I had to dump and import the existing data and recreate the users.
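The dump-and-restore itself was nothing exotic. A sketch of the shape of it, where the database names, the compose service name, and the MYSQL_ROOT_PASSWORD variable are stand-ins for my real setup:

mysqldump -uroot -p --databases tikiwiki greyhole --routines > /tmp/migrate.sql                         # from the old native install
docker-compose exec -T mariadb sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < /tmp/migrate.sql    # load into the container
# then recreate the application users with '%' hosts as described above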

Home Assistant and SmartThings

Wednesday 28 of April, 2021

After resolving the disaster I had with the extra VM running, there was still an issue with my motion detectors not communicating with HA.  After some debugging, I could see HA could talk to SmartThings, but ST could not send information back.  After some research, I found this question in the HA forums that speaks to the exact issue.  It seems my Remote URL was no longer configured through Nabu Casa.  Once it was reconfigured, I restarted HA and all is well.

Home Assistant issues

Wednesday 28 of April, 2021

I loaded Home Assistant in December and LOVE IT!  I've automated lights in my kitchen and office based on motion and have our decorative lights on a time-based automation as well.  But I've had some pains.  The biggest was that my time-based automations wouldn't trigger on time.  I run HA OS in a VirtualBox VM.  Initially that VM was running on Debian.  Out of desperation to fix the timing issue (pretty basic, I know), I moved the VM to a Windows host.  That solved the issue.

What I failed to do was stop the Debian-based VM from autostarting on reboot of the host.  Well, I rebooted the host and the old VM came up, so the two servers had the same IP and HA was non-functional.  Below is the post I put up in the HA Community.

Copied from my HA post:

My HA has been a mess for three days I don’t know where to start. I use SmartThings to talk to my devices on zwave and zigbee. Sunday I installed a couple Inovelli black dimmers without excluding the black series switch devices I removed. About the same time, HA started acting very weird. So what is weird?

1 - When I go to integrations, various ones will report being ‘not loaded’. Tuya, Plex, SmartThings have all done this.

2 - on the Supervisor page, some installed add-ons are not listed at times and show as loaded at others. Mosquitto Broker is the one I notice.

3 - Lovelace pages load without status, then refresh and might have status. Or they load with status, then refresh all greyed out.
3.5 - Web pages refresh randomly - even on Firefox

4 - Various integrations aren’t displaying the reload option. SmartThings being the one I notice most.

5 - the iOS app displayed an SSL error and wouldn’t connect for two days. (working now)

6 - Supervisor kept switching from showing I was running core v21.4.3 to 21.4.6.

7 - Configuration.yaml entries went missing
8 - many automations just stopped working. (motion based/time based)

Details: Running core 21.4.6, supervisor 21.4.0, running HA OS 5.13 in a VM

VM disk is 35% usage. htop in the terminal says HA is using 1.1GB of 1.9GB

What I’ve done: Restored a Saturday full backup on Monday; restarted SmartThings; reinstalled the original switches (only due to an incompatibility with the LED lights I was hoping to control)

I’m at a lost on what to do next.

Here are recent log entries showing the many things not working well:

 

Updating tuya light took longer than the scheduled update interval 0:00:15
8:51:04 PM – (WARNING) Light - message first occurred at 9:34:59 AM and shows up 734 times

Updating device list from legacy took longer than the scheduled scan interval 0:00:10
8:50:57 PM – (WARNING) Device tracker - message first occurred at 8:27:28 AM and shows up 1041 times

Unexpectedly disconnected from Roomba 192.168.1.101, code Bad protocol
8:50:54 PM – (WARNING) /usr/local/lib/python3.8/site-packages/roombapy/roomba.py - message first occurred at 8:24:53 AM and shows up 689 times

Timeout fetching MartinOrbi (Gateway) data
8:50:51 PM – (ERROR) UPnP - message first occurred at 8:23:52 AM and shows up 236 times

Updating tplink switch took longer than the scheduled update interval 0:00:30
8:50:38 PM – (WARNING) switch - message first occurred at 8:29:27 AM and shows up 38 times

Error fetching roku data: Invalid response from API: Timeout occurred while connecting to device
8:50:18 PM – (ERROR) Roku - message first occurred at 8:23:47 AM and shows up 631 times

Error scanning devices
8:50:17 PM – (WARNING) netgear - message first occurred at 8:27:48 AM and shows up 293 times

Error talking to API
8:50:17 PM – (ERROR) /usr/local/lib/python3.8/site-packages/pynetgear/init.py - message first occurred at 8:27:48 AM and shows up 277 times

Update of switch.tree_2 is taking over 10 seconds
8:49:48 PM – (WARNING) helpers/entity.py - message first occurred at 8:28:07 AM and shows up 144 times

Could not connect to Plex server: HDA PLex (HTTPSConnectionPool(host='plex.tv', port=443): Read timed out. (read timeout=30))
8:43:01 PM – (ERROR) Plex Media Server - message first occurred at 8:26:44 AM and shows up 166 times

Websocket connection failed, retrying in 15s: Cannot connect to host 192-168-1-3.b330529e76874d1089e7f2db80e08369.plex.direct:32400 ssl:default Try again
8:42:45 PM – (ERROR) /usr/local/lib/python3.8/site-packages/plexwebsocket.py - message first occurred at 8:28:12 AM and shows up 122 times

 

NSClient++ and Real-time Eventlog checks

Monday 02 of November, 2020

I am trying to get NSClient++ to work with NSCA to do real-time eventlog checks.  It's complicated, so here are my notes.  The documentation on this is a bit thin, so if there are holes here, comments are welcome.  And up front, a shout-out to the NSClient++ lead dev, Michael Medin.  He did a lot of work over many years to get the client in the shape it's in.  This work is based on v5.2.35 of the client.

Concept

The real-time log & eventlog system has two parts.  I'll call them the filter (or sensor) and the reporter.  The filter/sensor decides what events to look for and is configured under these setting headings:

[["/settings/eventlog"|/settings/eventlog]]
[["/settings/eventlog/real-time"|/settings/eventlog/real-time]]

[["/settings/eventlog/real-time/filters/default"|/settings/eventlog/real-time/filters/default]]
[["/settings/eventlog/real-time/filters/check_WSUS"|/settings/eventlog/real-time/filters/check_myfilter]]

The root heading is not used by me.
The /settings/eventlog/real-time heading is used to enable the real-time sensor and set some defaults:

[/settings/eventlog/real-time]
enabled=true
destination=NSCA
debug=true
startup age=30m
 

Sorting AD Objects by OU

Wednesday 30 of September, 2020

Today I was trying to sort a list of AD computer objects to group them by OU.  The obvious way was to try sorting by distinguishedName.  But that sort is simply alphabetical, so it is effectively a sort by computer name since that is listed first:

CN=RTC119Test,OU=Testing,DC=mydomain,DC=com

Then I tried canonicalName and got results I could use:

mydomain.com/Testing/RTC119Test

Here is the Powershell I used

Get-ADComputer -Filter * -Properties operatingsystem,canonicalName | Select-Object name,operatingsystem,canonicalName,distinguishedname | Sort-Object canonicalName | Format-Table

Note: You must specify "-Properties canonicalName" to force the server to return the canonicalName property so you can use it.

 

converting audio with ffmpeg

Monday 08 of April, 2019

Needed to convert a couple wma files to mp3 because their stream layout wouldn't play with subsonic without a special conversion command. This took care of it:

for file in *.wma; do ffmpeg -i "$file" -map 0:1? -b:a 320k -v 32 -f mp3 "/$(basename -s .wma "$file").mp3"; done   

The tricky part was using basename to cut off the extension so I could rename the file.

 

Manipulating Native Excel Files in Powershell

Wednesday 03 of October, 2018

The first time I used Import-CSV to access data in a CSV file, I knew I was leaving VBScript behind for PowerShell.  That was probably more than 10 years ago and I have had a lot of fun with PowerShell since then.  Today, I had reason to manipulate native Excel files (OOXML) and found the process as simple as CSV.  This all starts with a module from the PowerShell Gallery called ImportExcel, and it can be installed by running Install-Module ImportExcel in a pwsh window.

Once installed, accessing an excel file is as simple as accessing a csv file.  Execute

$Data = Import-Excel C:\temp\somefile.xlsx

As with Import-Csv, you can now access the various columns of the worksheet using the header name and the rows by the array index of the $Data object.

EX: $Data[1].Name will display the value of the Name column in the second row (the array is zero-based).

 

Managing TPLink Wi-fi Smart Plug with Powershell

Thursday 14 of June, 2018

I was given a TP-Link HS100 WiFi Smart Plug the other day.  It works with TP-Link's Kasa service and from your smart phone.  Kasa does support remote control of the devices, and this is done through a RESTful API service.  As I have a SmartThings hub, I went looking for a way to integrate it with that.  Integration isn't directly available, but I found this useful article from the ITNerd Space blog that shows you how to use JSON posts to a REST service to manage the device.

My scripting language of choice is PowerShell (now available on Linux) so I wrote a couple scripts to turn the plug on & off and query the state.

The script handles all the setup; you need only provide it with your Kasa service credentials and the alias for the smart plug.

 

 

Building a BrewPi

Tuesday 23 of January, 2018

I was a technology geek for years prior to brewing. I have a degree in Computer Science and have worked as a Systems Engineer and Engineering manager for 30 years.  Since 2012, brewing gave me a hobby different in many ways from my work.  At the same time it feeds my need to learn new things. 
For Christmas this year my wife gave me a Tilt Hydrometer.  It's a great device that measures specific gravity by   floating in the wort and reports it via bluetooth to a smart phone app.

Of course, the first question I asked was - what if I want to log this data?  A bit of research led me to the BrewPi software and then to Fermentrack.  Fermentrack is derived from BrewPi and it has the ability to receive the bluetooth data from the Tilt and graph it over time.  Here's my experience building the device.

First, I bought  a CanaKit from Amazon.

 

 

High Resolution screen, Remote Desktop and VirtualBox

Wednesday 27 of September, 2017

I bought a 2016 Yoga laptop with a hi-res (3200 x 1800) screen. I'm running the Windows Insider builds of Windows 10. Running a hi-res screen turns up several issues with apps that aren't prepared for all that resolution. One area I had an issue with was remote desktop, which I've now fixed.

The first thing to do is change the Windows desktop scaling factor. Windows recommends a 250% scale factor for my machine and I'm using that. The next thing to do is read a great reference from Scott Hanselman on living the hi-res lifestyle.

What I experienced with Remote Desktop to another Win 10 machine was a small window unreadable to my 55-and-over eyesight.

Doing more research, I came across this article from Falafel on Remote Desktop and hi-res.  The tip from Falafel is to make use of Remote Desktop Connection Manager and configure the display settings to use Full Screen. This will scale the remote desktop window to match your local screen, and it solved my problem.

The last issue was VirtualBox.  One of my remote PCs has a VirtualBox VM running Slackware. After scaling the remote desktop I opened the VM and it had not scaled.  After saying "hmmm", I went poking around the display settings for the VM and found the Scale Factor setting. Setting this to 200% gave me a usable VM in a remote desktop session.

Powershell on Linux

Monday 18 of September, 2017

I've been learning a lot about Microsoft's Linux initiatives over the past couple weeks.  I've started using the Windows Subsystem for Linux in lieu of putty for connecting to my Linux machines and recently started playing with their PowerShell implementation on Linux.  Last week I had a need to do some scripting on Linux and wanted to re-use some code I had on hand.

PowerShell can be installed from the repository on most machines.  The PowerShell github page has the details on how to configure your package manager to draw directly from the repository.
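On a Debian/Ubuntu-family box it boils down to a couple of commands. This is a sketch and assumes the Microsoft package repository has already been registered per the GitHub instructions:

sudo apt-get update
sudo apt-get install -y powershell
pwsh    # start a PowerShell session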

For my challenge, I wanted to profile the download speed of a particular website I help manage.  I already have a PS script that does most of what I wanted.  It was a simple task of reconfiguring it and testing to be sure all the features were available in the current Linux PS beta.  Here's the script.

$url = "http://files.myfakeco.com/downloads/filedownload/d3974543.zip"
$timer = measure-command {
    $req = [System.Net.WebRequest]::Create($url)
    try {
        $res = $req.GetResponse()
        $requestStream = $res.GetResponseStream()
        $readStream = New-Object System.IO.StreamReader $requestStream
        $data=$readStream.ReadToEnd()
    }
    catch [System.Net.WebException] {
        $res = $_.Exception.Response
    }
}
$sec =  $($timer.totalmilliseconds)/1000
$size = $res.Contentlength
$kbs =  "{0:N2}" -f ($size/$sec/1000)
$ssec =  "{0:N2}" -f $sec
echo "$size, $ssec, $kbs"
"$(get-date -f "MM-dd-yyyy hh:mm tt"), $($res.StatusCode), $size, $ssec, $kbs `r`n"|out-file -append /mnt/samba/Docs/dllog.txt
The script makes use of the .NET WebRequest API. The API downloads the file and reports status and stats derived from timing the download with Measure-Command.

But the best part of this is that the exact code runs on Windows Powershell.  I only modified the code to meet my specific needs for this report.

Fun with WSL (Ubuntu on Windows)

Tuesday 15 of August, 2017

I'm running Windows 10 1703 and have been toying with the Windows Subsystem for Linux (WSL). This version is based on Ubuntu.  There is some fun in making it useful.

SSH into WSL

I want to use putty from anywhere to access the shell. SSH requires a few things to make it useful.  Start the bash shell and edit /etc/ssh/sshd_config (sudo nano /etc/ssh/sshd_config)

  • Change the listener.
    • port 2222
  • Turn on Password Authentication (I'll discuss key auth in a bit)
  • Turn off Privilege separation. Rumor has it it isn't implemented
  • Allow TCP port 2222 in the Windows Firewall
  • Generate host key
    • sudo ssh-keygen -A
  • Restart ssh service
    • sudo service ssh --full-restart

You should be able to ssh into the host.
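For reference, the whole setup condensed into shell form. A sketch: the values are the ones described above, and the Windows firewall rule still has to be added on the Windows side:

sudo nano /etc/ssh/sshd_config            # set: Port 2222, PasswordAuthentication yes, UsePrivilegeSeparation no
sudo ssh-keygen -A                        # generate host keys
sudo service ssh --full-restart           # restart sshd
ssh -p 2222 yourname@your-windows-host    # then connect from another machine (or point putty at port 2222)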

 

 

 

 

 

Using Powershell to post data to IFTTT WebHooks service

Monday 07 of August, 2017

IFTTT has many useful triggers and I like Webhooks because it can enable so many fun interactions.  My goal today is sending JSON key:value pairs to WebHooks from Powershell (my preferred scripting language and now available on Linux!).  

WebHooks will accept three named parameters via JSON (also form data and url parameters) that can be referenced within the Action of your applet.  The parameters are named value1, 2 & 3, so the JSON should look like this:

{
    "value1":  "Good Morning",
    "value3":  "That is all.",
    "value2":  "Greg"
}

PowerShell has two methods for posting this to a URL: Invoke-WebRequest and Invoke-RestMethod.  The latter is apparently a wrapper of the former and returns only the string output from the POST.  Because of the possible error-checking needs, I'll focus on Invoke-WebRequest.

Here is the code that made this work:

$BaseURL = "https://maker.ifttt.com/trigger/GMhit/with/key/enteryourkeyhere"
  1. Note: The key (last part of URL is user unique
  2. The Trigger here is GMhit and unique to me. You would declare your own in the IFTTT service

$body = @{ value1="Good Morning" value2="Greg" value3="That is all." }

  1. Either works. Webrequest return status code
  2. Invoke-RestMethod -URI $BaseURL -Body (ConvertTo-Json $body) -Method Post -ContentType application/json

Invoke-WebRequest -URI $BaseURL -Body (ConvertTo-Json $body) -Method Post -ContentType application/json

Notes:

  • Setting the ContentType to `application/json` is important here.  This call didn't work until this was set correctly.
  • The value names are fixed and cannot be customized.

Recovering from a Bad Drive in a Greyhole storage pool

Monday 13 of February, 2017

I run an Amahi home server which hosts a number of web apps (including this blog) as well as a large pool of storage for my home.  Amahi uses greyhole (see here and here) to pool disparate disks into a single storage pool. Samba shares are then added to the pool and greyhole handles distributing data across the pool to use up free space in a controlled manner.  Share data can be made redundant by choosing to make 1, 2 or max copies of the data (where max means a copy on every disk).

The benefit over, say, RAID 5 is that 1) different size disks may be used; 2) each disk has its own complete file system which does not depend on disk grouping; 3) each file system is mounted (and can be unmounted) separately or on a different machine.

So right before the holidays, the 3TB disk on my server (paired with a 1 TB disk) started to go bad.  Reads were succeeding but took a long time.  Eventually we could no longer watch the video files we store on the server and play through WDTV.  Here is how I went about recovering service and the data (including the mistakes I made).

  • Bought a new 3TB drive and formatted it with ext4 and mounted it (using an external drive dock) and added it to the pool as Drive6.
  • Told greyhole the old disk was going away (drive4)
    greyhole --going=/var/hda/files/drives/drive4/gh

    Greyhole will look to copy any data off the drive that is not copied elsewhere in the pool. It has no effect on the data on the `going` disk (nothing is deleted), except it could cause further damage. The command ran for several days and, due to disk errors, didn't accomplish much, so I killed the process and took a new tack.

     

I decided to remove the disk from the pool and attempt an alternate method for recovering the data. 

 

  This took about two weeks to accomplish due to drive errors.  And because I was making a disk image, I eventually ran out of space on the new disk before it completed.

  • Told greyhole the drive was gone.
    greyhole --gone=/var/hda/files/drives/drive4/gh 
    Greyhole will no longer look for the disk or the data on it.  It has no effect on the data on disk. 

  • Used safecopy to make a drive image of the old disk to a file on the new disk. (If you've not used safecopy, check it out.  It will run different levels of data extraction, can be stopped and restarted using the same command, and will resume where it left off.)
    safecopy --stage1 /dev/sdd1 /var/hda/files/drives/Drive6/d1 -I /dev/null

rsync -av "/run/media/user/5685259e-b425-477b-9055-626364ac095e/gh/Video"  "/mnt/samba/"

 

  • Bought a  4TB drive and mounted it using an external dock as drive7; copied over and deleted the drive image from the Drive6.
  • Marked the 1TB drive (drive5) as going (see command above) and gone. This moved any good data off the 1TB drive to drive7 but left plenty of room to complete the drive image.

  • Swapped drive5 (1TB) and drive7 (4TB) in the server chassis. Retired the 1TB drive.

  • Mounted the bad 3TB drive in the external dock and resumed the safecopy using:
    safecopy --stage1 /dev/sdd1 /var/hda/files/drives/Drive7/d1 -I /dev/null
  • Mounted the drive image. The base OS for the server is Fedora 23. The drive tool includes a menu item to mount a drive image.  It worked pretty simply to mount the image at /run/media/username/someGUID (a CLI equivalent is sketched after this list).

  • Used rsync to copy the data from the image to the data share.  I use a service script called mount_shares_locally since the preferred method for putting data into the greyhole pool is by copying it to the samba share.  The one caveat here is that greyhole must stage the data while it copies it to the permanent location. That staging area is on the / partition under /var/hda.  I have about 300GB free on that partition, so I had to monitor the copy and kill the rsync every couple of hours. Fortunately, rsync handles this gracefully, which is why I chose it over a straight copy (see the sketch below).
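Two pieces of that step are worth writing down as commands: the CLI equivalent of mounting the image, and a loop that automates the stop-and-resume rsync dance I was doing by hand. Treat it as a sketch; the two-hour window and five-minute pause are arbitrary, and the paths are the same ones used above:

udisksctl loop-setup -f /var/hda/files/drives/Drive7/d1    # expose the image as a loop device
udisksctl mount -b /dev/loop0                              # mounts under /run/media/<user>/<uuid> (adjust the loop number)
# keep restarting rsync until it finishes cleanly, pausing so greyhole can drain its staging area under /var/hda
while ! timeout 2h rsync -av "/run/media/user/5685259e-b425-477b-9055-626364ac095e/gh/Video" "/mnt/samba/"; do
    sleep 300
done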

 

A couple observations.  First, because of the way I had greyhole shares setup, I had safe copies of the critical data. All my docs, photos and music had a safe second copy. The data on the failed disk was disposable.  I undertook the whole process because I wanted to see if it would work and whatever I recovered would only be a plus.  

This took some time and a bit of finesse on my part to get the data back.  But I like how well greyhole performed and how having the independent filesystems gave me the option to recover data on my time. Finding safecopy simplified this a lot and added a new weapon to my recovery toolkit!

 

Reset a Whirlpool Duet washer

Monday 06 of April, 2015

We accidentally started the washer with hot water feed turned off. When the washer tried to fill, it couldn't and generated F08 E01 error codes. After clearing the codes and restarting, we eventually got to a point where the panel wouldn't light up at all. Unplugging and re-plugging the power would do nothing except start the pump.
It was obvious it needed to be cleared. After too much searching, I found this link on forum.appliancepartspros.com (cache).

It tells you to press "Wash Temp", "Spin", "Soil" three times in a row to reset the washer. Once it resets, the original error will display. Press power once to clear it. After that - all was well (of course I turned on the water first)

Adjusting Brewing Water

Tuesday 10 of March, 2015

I recently got hold of (well, asked for and received) a water analysis from the Perkasie Borough Authority and have been staring at it for more than a month wondering what to do with it. I've read the section on water in Palmer's How To Brew and some of his Water book. These are both excellent resources and while I have a science background, they are quite technical and I've been unable to turn all the details into an action to take, if any, on my brewing water.

The March 2015 issue of Zymurgy (cache) has an article by Martin Brungard on Calcium and Magnesium that has helped me turn knowledge into action. At the risk of oversimplifying the guidance, I want to draw some conclusions for my use.

Some of Martin's conclusions

  • A level of 50mg/L Calcium is a nice minimum for many beer styles
  • You may want less than 50mg/L for lagers (wish I knew that a week ago) but not lower than 20
  • A range of 10-20mg/L Magnesium is a safe bet for most beers
  • Yeast starters need magnesium at the high end of that range to facilitate yeast growth


The Water Authority rep who gave me the report said two wells feed water to our part of the borough. Looking at the two wells, the Ca and Mg values are similar averaging 85 mg/L and 25mg/L respectively.

This leaves my water right in the sweet spot for average beer styles. What about some of the edge cases like IPAs and lagers?

  • For lagers, next time I'll dilute the mash water with 50% reverse osmosis (RO) water to reduce the Ca to about 40. I may want to supplement the Mg to bring it back to 20.
  • For IPAs, I may want to add Mg to bring it up near 40 mg/L.


Building a Temperature Controller from an STC-1000

Sunday 04 of August, 2013

My son &amp; I have been brewing beer together for 8 months now. We've been very intentional about moving slowly into the process of building both our knowledge and our brew system. As I'm already a tech geek, it is real easy for me to become a brewing geek as well and to go broke in the process. When we started collecting brewing equipment, I agreed to try to buy everything half price. Home Brew Findshas been invaluable when looking for the cheapest way to solve a brewing problem.

With the summer months and the need to lager a Dopplebock, I converted a 20 yr old dorm fridge into a fermentation fridge using 1.5" foam insulation.
Fermentation Fridge
Fermentation Fridge
And while this allowed me to lager, in this configuration it doesn't actually control the temperature so I went looking for a way to do that.

I settled on the Elitech STC-1000 as it is a cheap alternative to the Johnson Controls controller (cache). Of course, the latter controller is a full package with power cord and power connectors for the cooling and heating units. The STC-1000 unit by contrast consists only of the controller and control panel. Oh, and it is Celsius only so you need to convert. But Google makes that easy ("convert 68 to celsius"). My unit cost $35 to make while the Johnson is about $70.

To make use of the STC-1000, I had to build it into a package that allows for convenient use. Here's how I did it.

When I looked at the size of the STC-1000, it appeared to be the right size to fit in a standard outlet box (in the US), and it was real close in size to the GFCI cutout. I bought a plastic cover to hold a GFCI and duplex outlet. I then modified it to make the GFCI opening maybe 1/4" longer.

faceplate mod Mounted STC-1000

Next step was to mount the duplex outlet and wire it up. Keep in mind we need to control heating and cooling so we need to power the outlets individually. To do this, you have to break the copper tab on the black wire side of the outlet. I didn't take a before picture, but here it is after mod.
Outlet Modification

Now we can run a wire from the heating side to one outlet and from the cooling side to the other.

The other tricky piece is understanding that the STC-1000 only provides a relay service for activating the heating and cooling circuits - it doesn't actually supply power. I dealt with that by tapping the in-bound hot lead (black wire) to both the heating and cooling connectors. This is seen here with the first loop coming from the power in going to heating and the second loop from heating to cooling:

Outlet Modification

To power the outlet, I took a short wire from the heating to the outlet and from the cooling to the other outlet. The white wiring is pretty straight forward. Simply tap the in-bound white wire and connect it to one of the white lugs. No need for separate connections as the common wire is, um, common.
Finally, we add a tension-reliever to the box, run the temperature sensor through it, mount the outlet and buckle it up

tensioner Mounted STC-1000

Notes:
- I used a new orange 25' extension cord for the power side. I cut it in half and used wire from the unused half to do the wiring. I then added a new plug to the remaining cord so I had a usable cord.
- The STC-1000 was $19, the extension cord - $10, the box and cover $6. So this controller cost $35 plus two hours labor.

- Here is a wiring diagram
Wiring Diagram

Using getElementByID in Powershell

Thursday 07 of March, 2013

I was asked to pull a piece of information from a web page using Powershell. Fortunately for me, the item is tagged by an html "ID" element. While testing I discovered the following code worked when I stepped through it line by line, but failed when run as a script.
(Note 1: The following is a public example, not my actual issue. This snippet returns the number vote total for the question)

$ie = new-object -com InternetExplorer.Application
$ie.Navigate('http://linuxexchange.org/questions/832/programming-logic-question')
$ie.Document.getElementByID('post-832-score')|select innertext


The code is straightforward. It creates a new COM object to run Internet Explorer, navigates to a specific page, then looks for a specific "id" tag in the html and outputs the value. The problem we saw was that when we attempted to run the $ie.Document.getElementByID command we received an error saying it could not be run on a $null object.
The question was asked during the Philly PowerShell meeting whether the script needed to wait for the $ie.Navigate command to complete before moving on. And indeed this appears to be an asynchronous command. That is, PowerShell executes it but doesn't wait for it to complete before moving on to the next command.

The solution was the addition of a single line of code:

while ($ie.Busy -eq $true) { Start-Sleep 1 }

It simply loops until $ie is no longer busy.

The revised script looks like this:

$ie = new-object -com InternetExplorer.Application
$ie.Navigate('http://linuxexchange.org/questions/832/programming-logic-question')
while ($ie.Busy -eq $true) { Start-Sleep 1 }
$ie.Document.getElementByID('post-832-score')|select innertext
$ie.quit()

Adding XCache for PHP on Fedora

Tuesday 26 of February, 2013

I want to run XCache on my Amahi server to help speed up some php apps. Details on adding it are found here (cache)

  • yum install php-devel
  • yum groupinstall 'Development Tools' (44 packages - yikes)
  • yum groupinstall 'Development Libraries' (78 packages)


cd /tmp
wget http://xcache.lighttpd.net/pub/Releases/3.0.1/xcache-3.0.1.tar.gz
tar xvfz xcache-3.0.1.tar.gz
cd xcache-3.0.1

phpize
./configure --enable-xcache
make
make install
(Note: I ran make test and one test failed "xcache_set/get test")

Our First Brew

Friday 28 of December, 2012

Priscilla, Kyle & I have been enjoying local craft beer recently. There are several local breweries (Free Will, Round Guys, Prism) and many pubs are now serving craft beer along with the mass market beer. It's no surprise then that I received a home brew kit for Christmas this year. Kyle & I brewed our first batch today. He likes to call it Frozen Water Brewing.

Here are a few prep notes:


Brew notes:

  • We used an old aluminum pot for brewing - seemed to work well.
  • Warm the extract early so it pours easily
  • When first boil happens, be ready for the over-boil. It happens fast. Pull the pot off the heat quickly
  • Sanitize, sanitize, sanitize
  • The bottle filler tube from the True Brew kit works well to pull a sample out through the airlock hole. Sanitize the tube before dipping
  • First spec gravity test, 4 hours after starting fermentation read 1.045 - right on target
  • Fermentation should be done when SG reaches 1.009 - 1.003


We celebrated by running back to Keystone for bottles in the snowstorm then stopping by Prism Brewing in North Wales to try their beer.

Now we wait.

Hiding WordPress Login page

Saturday 08 of December, 2012

Our security guy showed me how to harvest editor names from WordPress. This, combined with the known location of the login page, makes the site susceptible to script kiddies plying their wares. A simple way to combat this is to create a redirect page somewhere and then restrict access to wp-login.php to visits coming from that page. I borrowed this idea from here. To implement this, I created my redirect page and added the following to the .htaccess file for the site.

.htaccess
# protect wp-login.php
<Files wp-login.php>
   Order deny,allow
   RewriteEngine on
   RewriteCond %{HTTP_REFERER} !^http://www.mywebplace.com/wp-content/uploads/anoddname.html$ [NC]
   RewriteRule .* - [F]
</Files>

These lines are interpreted like this:

  •  for all files called wp-login.php
    • default to deny
    • If the HTTP_Referrer is not anoddname.html
    • don't rewrite the page, but return Forbidden HTTP code

I then created 'anoddname.html' and added a meta-redirect like this:

AnOddname.html
<meta http-equiv="refresh" content="0;URL=http://www.mywebplace.com/wp-login.php">

These changes worked as expected. The site was fine, but to log in you have to visit the site by hitting anoddname.html first.  There is one problem: you cannot log out from the site.  That's because to log out you call wp-login.php again with ?action=logout appended to the url. Since you are on a page other than AnOddName.html at the time, you are forbidden from getting to wp-login.php.

To fix this, I added two more lines to the .htaccess file

.htaccess more
RewriteCond %{QUERY_STRING} ^action=logout [NC]
RewriteRule .* - [L]

With these lines added, .htaccess now checks first to see if you are calling with the "?action=logout" query string. If so, it does not rewrite and stops. The complete .htaccess section is now:

Complete .htaccess
# protect wp-login.php
<Files wp-login.php>
    Order deny,allow
    RewriteEngine on
    RewriteCond %{QUERY_STRING} ^action=logout [NC]
    RewriteRule .* - [L]
    RewriteCond %{HTTP_REFERER} !^http://www.mywebplace.com/wp-content/uploads/tbirdsarego.html$ [NC]
    RewriteRule .* - [F]
</Files>