
MartinsDen

Welcome to gmartin.org

High Resolution screen, Remote Desktop and VirtualBox

Wednesday 27 of September, 2017

I bought a 2016 Yoga laptop with a hi-res (3200 x 1800) screen. I'm running the Windows Insider builds of Windows 10. Running a hi-res screen turns up several issues with apps that aren't prepared for that much resolution. One area I had an issue with was Remote Desktop, which I've now fixed.


The first thing to do is change the Windows desktop scaling factor. Windows recommends a 250% scale factor for my machine, and I'm using that. The next thing to do is read a great reference from Scott Hanselman on living the hi-res lifestyle.


What I experienced with Remote Desktop to another Win 10 machine was a small window unreadable to my 55-and-over eyesight.



Doing more research, I came across this article from Falafel on Remote Desktop and hi-res. The tip from Falafel is to make use of Remote Desktop Connection Manager and configure the display settings to use Full Screen. This scales the remote desktop window to match your local screen, and it solved my problem.


The last issue was VirtualBox. One of my remote PCs has a VirtualBox VM running Slackware. After scaling the remote desktop I opened the VM and it had not scaled. After saying "hmmm", I went poking around the display settings for the VM and found the Scale Factor setting. Setting this to 200% gave me a usable VM in a remote desktop session.
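
For what it's worth, the scale factor can apparently also be set from the command line with VBoxManage, which would let you script it instead of clicking through the display settings. This is only a rough sketch; the VM name and install path are placeholders, and GUI/ScaleFactor is my assumption about the extradata key the GUI uses:

# Hedged example: ask VirtualBox to use a 2x GUI scale factor for a VM named "Slackware"
& "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" setextradata "Slackware" GUI/ScaleFactor 2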


PowerShell on Linux

Monday 18 of September, 2017

I've been learning a lot about Microsoft's Linux initiatives over the past couple of weeks. I've started using the Windows Subsystem for Linux in lieu of PuTTY for connecting to my Linux machines and recently started playing with their PowerShell implementation on Linux. Last week I had a need to do some scripting on Linux and wanted to re-use some code I had on hand.


PowerShell can be installed from the repository on most machines. The PowerShell GitHub page has the details on how to configure your package manager to draw directly from the repository.
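
Once it's installed, it's worth a quick check that you got the build you expect (depending on the beta, the launcher is either powershell or pwsh):

# From inside the PowerShell prompt, show the installed version
$PSVersionTable.PSVersion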


For my challenge, I wanted to profile the download speed of a particular website I help manage. I already had a PS script that does most of what I wanted. It was a simple task of reconfiguring it and testing to be sure all the features were available in the current Linux PS beta. Here's the script.



$url = "http://files.myfakeco.com/downloads/filedownload/d3974543.zip"


$timer = measure-command {

    $req = system.Net.WebRequest::Create($url)

    try {

        $res = $req.GetResponse()

        $requestStream = $res.GetResponseStream()

        $readStream = New-Object System.IO.StreamReader $requestStream

        $data=$readStream.ReadToEnd()

    }

    catch System.Net.WebException {

        $res = $_.Exception.Response

    }

}

$sec =  $($timer.totalmilliseconds)/1000

$size = $res.Contentlength

$kbs =  "{0:N2}" -f ($size/$sec/1000)

$ssec =  "{0:N2}" -f $sec


echo "$size, $ssec, $kbs"

"$(get-date -f "MM-dd-yyyy hh:mm tt"), $($res.StatusCode), $size, $ssec, $kbs `r`n"|out-file -append /mnt/samba/Docs/dllog.txt


The script makes use of the .NET WebRequest API. It downloads the file and reports the response status along with stats derived from timing the download with Measure-Command.


But the best part of this is that the exact same code runs on Windows PowerShell. I only modified the code to meet my specific needs for this report.
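
As an aside, if you don't need the raw WebRequest object, roughly the same measurement can be sketched with the built-in Invoke-WebRequest cmdlet. This is only an illustration against the same made-up URL; I haven't checked whether the cmdlet's overhead skews the timing:

# Time the download with Invoke-WebRequest and report throughput in KB/s
$timer = Measure-Command { $response = Invoke-WebRequest -Uri $url }
"{0:N2} KB/s" -f ($response.RawContentLength / $timer.TotalSeconds / 1000)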


Fun with WSL (Ubuntu on Windows)

Tuesday 15 of August, 2017

I'm running Windows 10 1703 and have been toying with the Windows Subsystem for Linux (WSL). This version is based on Ubuntu. There is some fun in making it useful.



SSH into WSL



I want to use PuTTY from anywhere to access the shell. SSH requires a few things to make it useful. Start the bash shell and edit /etc/ssh/sshd_config (sudo nano /etc/ssh/sshd_config):



  • Change the listener.
    • port 2222


  • Turn on Password Authentication (I'll discuss key auth in a bit)

  • Turn off Privilege separation. Rumor has it that it isn't implemented in WSL.

  • Allow TCP port 2222 in the Windows Firewall (see the PowerShell sketch after this list)

  • Generate host key
    • sudo ssh-keygen -A


  • Restart ssh service
    • sudo service ssh --full-restart
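
For the firewall step, this is roughly the PowerShell I'd run from an elevated prompt on the Windows side (the rule name is just a label I made up):

# Allow inbound TCP 2222 so PuTTY can reach the WSL sshd
New-NetFirewallRule -DisplayName "WSL SSH (port 2222)" -Direction Inbound -Protocol TCP -LocalPort 2222 -Action Allow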


You should be able to ssh into the host.

Using PowerShell to post data to the IFTTT WebHooks service

Monday 07 of August, 2017

IFTTT has many useful triggers, and I like Webhooks because it can enable so many fun interactions. My goal today is sending JSON key:value pairs to WebHooks from PowerShell (my preferred scripting language and now available on Linux!).


WebHooks will accept three named parameters via JSON (also form data and URL parameters) that can be referenced within the Action of your applet. The parameters are named value1, value2 and value3, so the JSON should look like this:


{
    "value1":  "Good Morning",
    "value3":  "That is all.",
    "value2":  "Greg"
}


PowerShell has two cmdlets for posting this to a URL: Invoke-WebRequest and Invoke-RestMethod. The latter is apparently a wrapper around the former and returns only the string output from the POST. Because I may want to do some error checking, I'll focus on Invoke-WebRequest.


Here is the code that made this work:


$BaseURL = "https://maker.ifttt.com/trigger/GMhit/with/key/enteryourkeyhere"


# Note: the key (the last part of the URL) is unique to each user
# The trigger here is GMhit and unique to me. You would declare your own in the IFTTT service

$body = @{ value1 = "Good Morning"; value2 = "Greg"; value3 = "That is all." }



# Either works; Invoke-WebRequest also returns the status code
# Invoke-RestMethod -URI $BaseURL -Body (ConvertTo-Json $body) -Method Post -ContentType application/json

Invoke-WebRequest -URI $BaseURL -Body (ConvertTo-Json $body) -Method Post -ContentType application/json
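
Since Invoke-WebRequest returns the full response object, a quick way to confirm IFTTT accepted the POST is to capture the response and look at the status code (just a sketch of how I check it):

# Capture the response and confirm IFTTT accepted the POST
$response = Invoke-WebRequest -URI $BaseURL -Body (ConvertTo-Json $body) -Method Post -ContentType application/json
"IFTTT returned HTTP $($response.StatusCode)"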


Notes:



  • Setting the ContentType to `application/json` is important here.  This call didn't work until this was set correctly.

  • The value names are fixed and cannot be customized.


Recovering from a Bad Drive in a Greyhole storage pool

Monday 13 of February, 2017

I run an Amahi home server which hosts a number of web apps (including this blog) as well as a large pool of storage for my home. Amahi uses greyhole (see here and here) to pool disparate disks into a single storage pool. Samba shares are then added to the pool, and greyhole handles distributing data across the pool to use up free space in a controlled manner. Share data can be made redundant by choosing to keep 1, 2 or max copies of the data (where max means a copy on every disk).


The benefit over, say, RAID 5 is that 1) different size disks may be used; 2) each disk has its own complete file system which does not depend on disk grouping; 3) each file system is mounted (and can be unmounted) separately or on a different machine.


So right before the holidays, the 3TB disk on my server (paired with a 1TB disk) started to go bad. Reads were succeeding but took a long time. Eventually we could no longer watch the video files we store on the server and play through WDTV. Here is how I went about recovering service and the data (including the mistakes I made).



  • Bought a new 3TB drive, formatted it with ext4, mounted it using an external drive dock, and added it to the pool as Drive6.

  • Told greyhole the old disk was going away (drive4)
    greyhole --going=/var/hda/files/drives/drive4/gh

    Greyhole will look to copy any data off the drive that is not copied elsewhere in the pool. It has no effect on the data on the `going` disk (nothing is deleted), though the extra reads can stress a failing drive further. The command ran for several days and, due to disk errors, didn't accomplish much, so I killed the process and took a new tack.

I decided to remove the disk from the pool and attempt an alternate method for recovering the data. 


  • Told greyhole the drive was gone.
    greyhole --gone=/var/hda/files/drives/drive4/gh 
    Greyhole will no longer look for the disk or the data on it.  It has no effect on the data on disk. 


  • Used safecopy to make a drive image of the old disk to a file on the new disk. (If you haven't used safecopy, check it out. It will run different levels of data extraction, can be stopped and restarted using the same command, and will resume where it left off.)
    safecopy --stage1 /dev/sdd1 /var/hda/files/drives/Drive6/d1 -I /dev/null


This took about two weeks to accomplish due to drive errors.  And because I was making a disk image, I eventually ran out of space on the new disk before it completed.



  • Bought a 4TB drive and mounted it using an external dock as drive7; copied over and then deleted the drive image from Drive6.


  • Marked the 1TB drive (drive5) as going (see command above) and gone. This moved any good data off the 1TB drive to drive7 but left plenty of room to complete the drive image.




  • Swapped drive5 (1TB) and drive7 (4TB) in the server chassis. Retired the 1TB drive.




  • Mounted the bad 3TB drive in the external dock and resumed the safecopy using:
    safecopy --stage1 /dev/sdd1 /var/hda/files/drives/Drive7/d1 -I /dev/null



  • Mounted the drive image. The base OS for the server is Fedora 23, and its drive tool includes a menu item to mount a drive image. It worked pretty simply, mounting the image at /run/media/username/someGUID.




  • Used rsync to copy the data from the image to the data share. I use a service script called mount_shares_locally because the preferred method for putting data into the greyhole pool is copying it to the Samba share. The one caveat here is that greyhole must stage the data while it copies it to the permanent location. That staging area is on the / partition under /var/hda. I have about 300GB free on that partition, so I had to monitor the copy and kill the rsync every couple of hours. Fortunately, rsync handles this gracefully, which is why I chose it over a straight copy.



rsync -av "/run/media/user/5685259e-b425-477b-9055-626364ac095e/gh/Video"  "/mnt/samba/"



 


A couple of observations. First, because of the way I had my greyhole shares set up, I had safe copies of the critical data. All my docs, photos and music had a safe second copy. The data on the failed disk was disposable. I undertook the whole process because I wanted to see if it would work, and whatever I recovered would only be a plus.


This took some time and a bit of finesse on my part to get the data back. But I like how well greyhole performed and how having independent filesystems gave me the option to recover data on my own schedule. Finding safecopy simplified this a lot and added a new weapon to my recovery toolkit!


\\Greg
