
MartinsDen

Welcome to gmartin.org

Recovering from a Bad Drive in a Greyhole storage pool

Monday 13 of February, 2017

I run an Amahi home server which hosts a number of web apps (including this blog) as well as a large pool of storage for my home.  Amahi uses greyhole (see here and here) to pool disparate disks into a single storage pool. Samba shares are then added to the pool, and greyhole handles distributing data across the pool to use free space in a controlled manner.  Share data can be made redundant by choosing to keep 1, 2 or max copies of the data (where max means a copy on every disk).


The benefit over, say, RAID 5 is that 1) different size disks may be used; 2) each disk has its own complete file system; 3) each file system is mounted (and can be unmounted) separately.


So right before the holidays, the 3TB disk in my server (paired with a 1TB disk) started to go bad.  Reads were succeeding but took a long time.  Eventually we could no longer watch the video files we store on the server through our WDTV.  Here is how I went about restoring service and recovering the data.



  • Bought a new 3TB drive, formatted it with ext4, mounted it (using an external drive dock) and added it to the pool as Drive6.

  • Told greyhole the old disk was going away:

greyhole --going=/var/hda/files/drives/drive4/gh



  • This told greyhole to move the data off the drive as it was being removed.  It ran for several days and, due to disk errors, didn't accomplish much, so I killed the process and took a new tack.

  • Told greyhole the drive was gone:

greyhole --gone=/var/hda/files/drives/drive4/gh


  • Ran safecopy to make a drive image of the old disk to a file on the new disk.  (If you have not used safecopy, check it out.  It will run different levels of data extraction, and it can be stopped and restarted using the same command, resuming where it left off.)


 safecopy --stage1 /dev/sdd1 /var/hda/files/drives/Drive6/d1 -I /dev/null


This took about two weeks due to drive errors, and eventually I ran out of space on the new disk before it completed.



  • Bought a 4TB drive and mounted it in the dock (drive7); copied the drive image to it, then deleted the image from Drive6.


  • Marked the 1TB drive (drive5) as going (see command above) and then gone. This moved any good data off the 1TB drive to drive7 but left plenty of room to complete the drive image.
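For reference, this is the same pair of commands used for drive4 above, now pointed at drive5:

```shell
# Ask greyhole to migrate data off drive5 ahead of its removal...
greyhole --going=/var/hda/files/drives/drive5/gh
# ...then, once it has moved what it can, drop the drive from the pool
greyhole --gone=/var/hda/files/drives/drive5/gh
```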




  • Swapped drive5 (1TB) and drive7 (4TB) in the server chassis. Retired the 1TB drive.




  • Mounted the bad 3TB drive in the external dock and resumed the safecopy using 



safecopy --stage1 /dev/sdd1 /var/hda/files/drives/Drive7/d1 -I /dev/null
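If stage1 leaves unreadable gaps, safecopy offers two more aggressive passes over the same source and image. A sketch using the device and image path from the command above; like stage1, each stage resumes if re-run with the identical command:

```shell
# Stage 2 retries the areas stage 1 skipped, using smaller reads
safecopy --stage2 /dev/sdd1 /var/hda/files/drives/Drive7/d1
# Stage 3 grinds over the remaining bad spots at the lowest level
safecopy --stage3 /dev/sdd1 /var/hda/files/drives/Drive7/d1
```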



  • Mounted the drive image. The base OS for the server is Fedora 23, and its disk tool includes a menu item to mount a drive image.  It worked pretty simply, mounting the image at /run/media/username/some GUID.
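If you'd rather skip the GUI tool, the same thing can be done from the shell with a read-only loop mount (the /mnt/recovery mount point here is my own choice, not something Fedora creates):

```shell
# Create a mount point and attach the image read-only via a loop device
sudo mkdir -p /mnt/recovery
sudo mount -o loop,ro /var/hda/files/drives/Drive7/d1 /mnt/recovery
# The recovered share data lives under the gh directory inside the image
ls /mnt/recovery/gh
```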




  • Used rsync to copy the data from the image to the data share.  I use a service script called mount_shares_locally, as the preferred method for putting data into the greyhole pool is to copy it to the Samba share.  The one caveat here is that greyhole must stage the data while it copies it to the permanent location. That staging area is on the / partition under /var/hda.  I have about 300GB free on that partition, so I had to monitor the copy and kill the rsync every couple of hours. Fortunately, rsync handles this gracefully, which is why I chose it over a straight copy.


rsync -av "/run/media/user/5685259e-b425-477b-9055-626364ac095e/gh/Video"  "/mnt/samba/"
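The kill-and-rerun cycle can also be automated. This is a sketch of what I did by hand, not the exact script I ran; the one-hour window and ten-minute pause are guesses you would tune to your own staging space:

```shell
# Run rsync in one-hour slices; timeout stops it so greyhole can drain
# its staging area on /, then the loop resumes where rsync left off.
until timeout 1h rsync -av \
  "/run/media/user/5685259e-b425-477b-9055-626364ac095e/gh/Video" \
  "/mnt/samba/"
do
  echo "Pausing so greyhole can flush the /var/hda staging area..."
  sleep 600
done
```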


 


A couple of observations.  First, because of the way I had my greyhole shares set up, I had safe copies of the critical data: all my docs, photos and music had a second copy.  The data on the failed disk was disposable.  I undertook the whole process because I wanted to see if it would work, and whatever I recovered would only be a plus.


This took some time and a bit of finesse to get the data back.  But I like how well greyhole performed and how having independent filesystems gave me the option to recover data on my own time. Finding safecopy simplified this a lot and added a new weapon to my recovery toolkit!


 


 


Reset a Whirlpool Duet washer

Monday 06 of April, 2015
We accidentally started the washer with hot water feed turned off. When the washer tried to fill, it couldn't and generated F08 E01 error codes. After clearing the codes and restarting, we eventually got to a point where the panel wouldn't light up at all. Unplugging and re-plugging the power would do nothing except start the pump.
It was obvious it needed to be cleared. After too much searching, I found this link on forum.appliancepartspros.com (cache).

It tells you to press "Wash Temp", "Spin", "Soil" three times in a row to reset the washer. Once it resets, the original error will display. Press Power once to clear it. After that, all was well (of course, I turned on the water first).


Adjusting Brewing Water

Tuesday 10 of March, 2015
I recently got hold of (well, asked for and received) a water analysis from the Perkasie Borough Authority and have been staring at it for more than a month wondering what to do with it. I've read the section on water in Palmer's How To Brew and some of his Water book. These are both excellent resources and while I have a science background, they are quite technical and I've been unable to turn all the details into an action to take, if any, on my brewing water.

The March 2015 issue of Zymurgy (cache) has an article by Martin Brungard on Calcium and Magnesium that has helped me turn knowledge into action. At the risk of oversimplifying the guidance, I want to draw some conclusions for my use.

Some of Martin's conclusions
  • A level of 50mg/L Calcium is a nice minimum for many beer styles
  • You may want less than 50mg/L for lagers (I wish I'd known that a week ago) but not lower than 20
  • A range of 10-20mg/L Magnesium is a safe bet for most beers
  • Yeast starters need magnesium at the high end of that range to facilitate yeast growth

The Water Authority rep who gave me the report said two wells feed water to our part of the borough. Looking at the two wells, the Ca and Mg values are similar, averaging 85 mg/L and 25 mg/L respectively.

This leaves my water right in the sweet spot for average beer styles. What about some of the edge cases like IPAs and lagers?

  • For lagers, next time I'll dilute the mash water with 50% reverse osmosis (RO) water to reduce the Ca to about 40. I may want to supplement the Mg to bring it back to 20.
  • For IPAs, I may want to add Mg to bring it up near 40 mg/L.
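The dilution numbers above are just weighted averages. A quick check of the 50% RO plan (assuming the RO water carries ~0 mg/L of either mineral):

```shell
# C_final = C_tap*(1-f) + C_ro*f, with f = fraction of RO water (C_ro = 0)
awk 'BEGIN { f=0.5; printf "Ca: %.1f mg/L  Mg: %.1f mg/L\n", 85*(1-f), 25*(1-f) }'
# → Ca: 42.5 mg/L  Mg: 12.5 mg/L
```

The Ca lands near the 40 I'm after, and the Mg falling to 12.5 is why I may need to supplement it back up to 20.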


Building a Temperature Controller from an STC-1000

Sunday 04 of August, 2013
My son & I have been brewing beer together for 8 months now. We've been very intentional about moving slowly into building both our knowledge and our brew system. As I'm already a tech geek, it is real easy for me to become a brewing geek as well and to go broke in the process. When we started collecting brewing equipment, I agreed to try to buy everything at half price. Home Brew Finds has been invaluable when looking for the cheapest way to solve a brewing problem.
With the summer months and the need to lager a Dopplebock, I converted a 20 yr old dorm fridge into a fermentation fridge using 1.5" foam insulation.
Fermentation Fridge
Fermentation Fridge
And while this allowed me to lager, in this configuration it doesn't actually control the temperature so I went looking for a way to do that.
I settled on the Elitech STC-1000, as it is a cheap alternative to the Johnson Controls controller (cache). Of course, the latter is a full package with a power cord and power connectors for the cooling and heating units. The STC-1000, by contrast, consists only of the controller and control panel. Oh, and it is Celsius-only, so you need to convert. But Google makes that easy ("convert 68 to celsius"). My unit cost $35 to make while the Johnson is about $70.

To make use of the STC-1000, I had to build it into a package that allows for convenient use. Here's how I did it.

When I looked at the size of the STC-1000, it appeared to be the right size to fit in a standard outlet box (in the US), and it was real close in size to the GFCI cutout. I bought a plastic cover to hold the GFCI and duplex outlet, then modified it to make the GFCI opening maybe 1/4" longer.

faceplate mod Mounted STC-1000


The next step was to mount the duplex outlet and wire it up. Keep in mind we need to control heating and cooling, so we need to power the outlets individually. To do this, you have to break the copper tab on the black-wire side of the outlet. I didn't take a before picture, but here it is after the mod.
Outlet Modification

Now we can run a wire from the heating side to one outlet and from the cooling side to the other.

The other tricky piece is understanding that the STC-1000 only provides a relay service for activating the heating and cooling circuits - it doesn't actually supply power. I dealt with that by tapping the in-bound hot lead (black wire) to both the heating and cooling connectors. This is seen here with the first loop coming from the power in going to heating and the second loop from heating to cooling:
Outlet Modification

To power the outlets, I ran a short wire from the heating connector to one outlet and from the cooling connector to the other. The white wiring is pretty straightforward: simply tap the in-bound white wire and connect it to one of the white lugs. No need for separate connections, as the common wire is, um, common.
Finally, we add a tension reliever to the box, run the temperature sensor through it, mount the outlet and buckle it up.
tensioner Mounted STC-1000

Notes:
- I used a new orange 25' extension cord for the power side. I cut it in half and used wire from the unused half to do the wiring. I then added a new plug to the remaining cord so I had a usable cord.
- The STC-1000 was $19, the extension cord - $10, the box and cover $6. So this controller cost $35 plus two hours labor.
- Here is a wiring diagram
Wiring Diagram


Using getElementByID in PowerShell

Thursday 07 of March, 2013
I was asked to pull a piece of information from a web page using Powershell. Fortunately for me, the item is tagged by an html "ID" element. While testing I discovered the following code worked when I stepped through it line by line, but failed when run as a script.
(Note 1: The following is a public example, not my actual issue. This snippet returns the vote total for the question.)

$ie = new-object -com InternetExplorer.Application
$ie.Navigate('http://linuxexchange.org/questions/832/programming-logic-question')
$ie.Document.getElementByID('post-832-score')|select innertext


The code is straightforward. It creates a new COM object to run Internet Explorer, navigates to a specific page, then looks for a specific "id" tag in the HTML and outputs the value. The problem we saw was that when we attempted to run the $ie.Document.getElementByID command, we received an error saying it could not be run on a $null object.
Someone at the Philly PowerShell meeting suggested that perhaps the script needed to wait for the $ie.Navigate command to complete before moving on. And indeed this appears to be an asynchronous command. That is, PowerShell executes it but doesn't wait for it to complete before moving on to the next command.

The solution was the addition of a single line of code:
while ($ie.Busy -eq $true) { Start-Sleep 1 }

It simply loops until $ie is no longer busy.

The revised script looks like this:
$ie = new-object -com InternetExplorer.Application
$ie.Navigate('http://linuxexchange.org/questions/832/programming-logic-question')
while ($ie.Busy -eq $true) { Start-Sleep 1 }
$ie.Document.getElementByID('post-832-score')|select innertext
$ie.quit()




\\Greg

Created by gmartin. Last Modification: Tuesday 27 of December, 2016 17:16:22 EST by gmartin.