Getting old

2 comments

Maybe it's just me getting old and grouchy but it seemed this past Halloween that a significant percentage of kids at my door didn't even bother to say "Trick or Treat!" -- they just held up their bags expectantly as if to say "Look, we all know the drill.  Just give us the candy and we can move on with our lives."

Sigh...

Tutorial: unlock bootloader and gain root on Verizon HTC 10

2 comments

I picked up an HTC 10 a couple months ago but went with the Verizon contract version because the unlocked model was crazy expensive.  That meant I was stuck with Verizon's bloatware and stock recovery -- lame.

Fortunately, it's fairly easy to fix all that for $25.

WARNING: THE FOLLOWING GUIDE IS ONLY FOR U.S. VERIZON HTC 10 SMARTPHONES AND PC OWNERS.  I CANNOT BE HELD RESPONSIBLE FOR ANYTHING THAT GOES WRONG IN THE PROCESS OF FOLLOWING THIS TUTORIAL.  USING YOUR DEVICE IN THE WAYS DESCRIBED BELOW WILL LIKELY VOID YOUR WARRANTY.  PROCEED AT YOUR OWN RISK.

Okay, if you've made it this far you're pretty committed to doing this.  So, without further ado:

1. On your PC, get the necessary drivers by installing HTC Sync Manager (just choose the driver, not the GUI when prompted -- or uninstall the GUI after it installs)

2. On your phone, go to Settings > About > Software information > More and tap Build number about seven times until it notifies you that you've gained developer privileges

3. Go to Settings > Developer options and check USB debugging

4. Plug your phone into your computer using its USB cable and verify you can access the internal storage or SD card.

5.  Download the SuperSU recovery flashable zip file and copy it to your phone (SD card recommended)

6. On your phone, go to Settings > Security and check Unknown sources (and accept the resulting nag prompt)

7. Then open your smartphone browser and go to http://theroot.ninja/download.html and download SunShine for Motorola and HTC (version 3.4.2 or higher)

8. After it downloads, click on it to install it.  The app will walk you through the process.  It will first check to see if your device is compatible and will attempt to gain temporary root access.  During the root attempt, it will instruct you to turn off your screen for 15 seconds.  If the root attempt fails (like it did for me the first time), it will recommend you restart your phone and run the SunShine app again to try a second time.  It worked for me the second time.  Next it will ask if you want to unlock the bootloader with S-ON or S-OFF.  You can do your own online research to decide which to use, but most people go with the S-ON option (which is safer, doesn't require a reformat, and still allows you to install custom ROMs).  Once you've unlocked your bootloader, you can now go to Settings > Security and uncheck Unknown sources.

9. Now you're ready to gain root access.  On your PC, download and install Minimal ADB and Fastboot

10. Then download the img file for TWRP for HTC 10 to the adb install directory (C:\Program Files (x86)\Minimal ADB and Fastboot).

11. In the command prompt that opened after installing Minimal ADB and Fastboot, type: adb reboot bootloader


12. Your phone should reboot to an ugly Rainbow-Brite multi-colored text screen that says "Software status: Official" and "Unlocked".  At this point, in the command prompt type: fastboot flash recovery {name of img file from step 10 above}   (e.g. "fastboot flash recovery twrp-3.0.2-6-pme.img")

13. After a moment, it should say OKAY and finished.  Then, use the volume-down button on your phone to highlight BOOT TO RECOVERY MODE and press the power button to select it.  Your phone will reboot into the TWRP recovery screen

14. Activate write-mode by sliding the slider on the screen and then choose the Install button.  Locate the SuperSU zip file from step 5 above and press it to select it.  Note: you may need to press the Select Storage button to change to the SD card.  Slide the slider to confirm the flash install.

15. Once SuperSU is installed, reboot your phone.  Note: it may take a few minutes the first time booting after installing the app.

16. Open the Google Play Store on your phone and install Root Checker by CMDann.  Run it and click the Verify Root Access button at the top.  It may take a minute for SuperSU to finish installing and prompt you to allow Root Checker root access, but eventually you will be prompted and you should allow it root access.  Root Checker will then display a green checkmark indicating you have root access.

Congrats!  You have an unlocked bootloader with root access on a U.S. Verizon HTC 10 smartphone.



P.S. Xposed Framework also works:

A. Follow step 6 above and then install the Xposed apk

B. Download the latest Xposed arm64 SDK zip file to your SD card

C. Reboot to recovery and install the Xposed SDK zip file

D. Reboot and wait about 15 minutes for the install to complete (be patient, it really does take a long time and there's no initial sign of progress on the Verizon booting screen so it's easy to panic)

E. Open the Xposed app and verify it works (installing any desired modules), then uncheck the Unknown sources checkbox in Settings > Security

Goodbye named anchors

0 comments

Ever have that embarrassing moment when as an IT professional you realize you've been following a deprecated or unsupported process for years?  I had that a while back when I was still under the false impression you had to restart Windows to apply new updates to the PATH environment variable.

Well, this week it happened again: no more named anchors.


Old way:
<a name="top">Top of page</a>

New HTML5 way:
<h1 id="top">My web page</h1>

with the 'top of page' link being
<a href="#top">Top of page</a>


Credit

Aurelia RC has arrived

0 comments


The first Aurelia Release Candidate has finally arrived.  It will be interesting to see how the JavaScript community responds...

Any sane i18n options for JavaScript?

1 comment


Although I like Aurelia in general, I have to admit their i18n process is rather complicated and inelegant.  This isn't Aurelia's fault -- I have yet to find any JavaScript library/framework that does a stellar job at localization/internationalization (especially for continuous builds/integration).  Any recommendations from my readers?


Update: I came across Localize.js and it works really well (although it's a commercial product so it may not be the right fit for everyone).

Pro Tip: The Joel Test

0 comments

Most checklists about software development teams are either too complicated or too buzzword-hyped.  This list is an exceptional outlier and spot-on: The Joel Test

  1. Do you use source control?
  2. Can you make a build in one step?
  3. Do you make daily builds?
  4. Do you have a bug database?
  5. Do you fix bugs before writing new code?
  6. Do you have an up-to-date schedule?
  7. Do you have a spec?
  8. Do programmers have quiet working conditions?
  9. Do you use the best tools money can buy?
  10. Do you have testers?
  11. Do new candidates write code during their interview?
  12. Do you do hallway usability testing?

A score of 12 is perfect, 11 is tolerable, but 10 or lower and you've got serious problems. The truth is that most software organizations are running with a score of 2 or 3, and they need serious help, because companies like Microsoft run at 12 full-time.

Linux has a long way to go...

0 comments


I'm a big fan of Linux but I'll be the first to admit it has a long way to go.  I came across this page which does a great job highlighting the shortcomings and providing a roadmap for improvement: http://itvision.altervista.org/why.linux.is.not.ready.for.the.desktop.current.html

(updated) Distributed File System benchmark

13 comments

Note: this is an update to my previous test


I'm investigating various distributed file systems (loosely termed here to include SAN-like solutions) for use in Docker, Drupal, etc. and couldn't find recent benchmark stats for some popular solutions so I figured I'd put one together.

Disclaimer: This is a simple benchmark test with no optimization or advanced configuration so the results should not be interpreted as authoritative.  Rather, it's a 'rough ballpark' product comparison to augment additional testing and review.


My Requirements:
  • No single-point-of-failure (masterless, multi-master, or automatic near-instantaneous master failover)
  • POSIX-compliant (user-land FUSE)
  • Open source (non-proprietary)
  • Production ready (version 1.0+, self-proclaimed, or widely recognized as production-grade)
  • New GA release within the past 12 months
  • Ubuntu-compatible and easy enough to set up via CloudFormation (for benchmark testing purposes)

Products Tested:
Others:

AWS Test Instances:
  • Ubuntu 14.04 LTS paravirtual x86_64 (AMI)
  • m1.medium (1 vCPU, 3.75 GB memory, moderate network performance)
  • 410 GB hard drive (local instance storage)

Test Configuration:

Three master servers were used for each test of 2, 4, and 6 clients.  Each client runs a small amount of background disk usage (file create and update):

(crontab -l ; echo "* * * * * ( echo \$(date) >> /mnt/glusterfs/\$(hostname).txt && echo \$(date) > /mnt/glusterfs/\$(hostname)_\$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 25 | head -n 1).txt )") | sort - | uniq - | crontab -
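Unpacked, the interesting part of that one-liner is the random-filename generation.  Here's just that piece as a standalone sketch you can run to see what each client writes every minute:

```shell
# Each minute, the cron job appends a timestamp to a well-known per-host file
# and creates a brand-new file whose name ends in a random 25-character
# alphanumeric suffix.  The suffix is generated like this:
suffix=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 25 | head -n 1)
echo "${#suffix}"   # -> 25
```

The `sort - | uniq -` at the end of the original one-liner just de-duplicates the crontab entry so re-running the command doesn't stack up duplicate jobs.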

Results of the three tests were averaged.  Benchmark testing was performed with bonnie++ 1.97 and fio 2.1.3.
Example Run:
$ sudo su -
# apt-get update -y && apt-get install -y bonnie++ fio 
# screen 
# bonnie++ -d /mnt/glusterfs -u root -n 1:50m:1k:6 -m 'GlusterFS with 2 data nodes' -q | bon_csv2html >> /tmp/bonnie.html
# cd /tmp
# wget -O crystaldiskmark.fio http://www.winkey.jp/downloads/visit.php/fio-crystaldiskmark
# sed -i 's/directory=\/tmp\//directory=\/mnt\/glusterfs/' crystaldiskmark.fio
# sed -i 's/direct=1/direct=0/' crystaldiskmark.fio 
# fio crystaldiskmark.fio
Translation: "Login as root, update the server, install bonnie++ and fio, then run the bonnie++ benchmark tool in the GlusterFS-synchronized directory as the root user using a test sample of 1,024 files ranging between 1 KB and 50 MB in size spread out across 6 sub-directories.  When finished, send the raw CSV result to the html converter and output the result as /tmp/bonnie.html.  Next, run the fio benchmark tool using the CrystalDiskMark script by WinKey referenced here."

Important Notes: 

1.  Only GlusterFS and LizardFS could complete the intense multi-day bonnie++ test.  The others failed with these errors:
  • CephFS (both kernel and fuse)
    • Can't write block.: Software caused connection abort
    • Can't write block 585215.
    • Can't sync file.
  • SXFS
    • Can't write data.
2.  GlusterFS and LizardFS had significant differences in bonnie++ latency which couldn't be shown on the graph without distorting the scale:


                   Seq Create (sec)   Rand Create (sec)
GlusterFS 3.7.6    173                164
LizardFS 3.9.4     3                  3


3.  GlusterFS took at least twice as long as LizardFS to complete the bonnie++ tests (literally 48 hours!).  Out of curiosity I switched to xfs, which helped performance significantly (under 24 hours); however, all tests were done with ext4 (the Ubuntu default).

4.  CephFS did not complete the "Rand-Read-4K-QD32" fio test



Results (click to view larger image):







(Note: raw results can be found here)

_______________________________________________________


Concluding Remarks:
  • Since GlusterFS and LizardFS were the only ones that could complete the more intense bonnie++ test, I would feel more confident recommending them as "production ready" for heavy, long-term loads.
  • Also (as mentioned above), LizardFS was much faster than GlusterFS (at the cost of higher CPU usage).
  • In terms of setup and configuration, GlusterFS was easiest, followed by LizardFS, then SXFS, and finally (in a distant last place) CephFS.
  • SXFS shows promise but they'll need to simplify their setup process (especially for non-interactive configuration) and resolve the bonnie++ failure.
  • My overall recommendation is currently GlusterFS.  (Update: I originally recommended LizardFS, but stopped because metadata HA is not currently supported out of the box -- see comments below).


What's the difference between KB and KiB?

0 comments


I've always wondered what the big deal was between 1000 and 1024.  Some hard drive manufacturers and cloud providers stress they support one or the other but I never understood why.  It's just 24 bytes, right?  Well, it adds up and makes a big difference.  Here's a nice overview: http://blog.pulsedmedia.com/2015/09/kibibyte-kilobyte-the-divider-1024-and-1000/
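As a concrete illustration (my own quick arithmetic, not from the linked article): a drive marketed as "1 TB" (decimal, powers of 1000) shows up as only about 931 GiB (binary, powers of 1024) in your operating system:

```shell
# "1 TB" as marketed vs. what the OS actually reports
marketed=$((1000 ** 4))            # 1 TB  = 1,000,000,000,000 bytes
tebibyte=$((1024 ** 4))            # 1 TiB = 1,099,511,627,776 bytes
echo $(( marketed / 1024 ** 3 ))   # GiB shown by the OS -> 931
```

That "missing" 69 GiB is where the 1000-vs-1024 distinction stops being pedantic.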

Securely share dynamic secrets between Linux computers

0 comments


UPDATE: the curl.io service no longer works but the concept demonstrated below still works with a similar service like fh.tl

____________

I needed to set up password-less ssh access between a cluster of AWS Linux computers via CloudFormation.  Although ssh-copy-id was designed to help with this, it still presumes you have a login password which complicates things with design-time scripting, like CloudFormation.

Here was the solution I came up with (using a generic example of a random secret file):

On first server:

PRIVATEFILE='/tmp/secret.txt'
 PRIVATEPASSWORD='myrandompassword'
PUBLICTOKEN=globallyuniquepublicstring
PUBLICCURLIOTOKEN='v2ioebm0'

CURLIO=$( ( gpg --cipher-algo AES256 --symmetric --yes --batch --passphrase=${PRIVATEPASSWORD} -c ${PRIVATEFILE} && curl -F "file=@${PRIVATEFILE}.gpg" https://curl.io/send/${PUBLICCURLIOTOKEN} ) 2>&1 | grep '^https' )

test -n "${CURLIO}" && ( curl -s "https://scry.in/api.php?action=shorturl&format=simple&keyword=${PUBLICTOKEN}&url=${CURLIO}" > /dev/null ) && rm "${PRIVATEFILE}.gpg"


On some other server(s):

PRIVATEFILE='/tmp/secret.txt'
 PRIVATEPASSWORD='myrandompassword'
PUBLICTOKEN=globallyuniquepublicstring

curl -s $( curl -s "https://scry.in/${PUBLICTOKEN}" | grep -oh 'https.*"' | head -1 | sed -e 's/"$//' ) | gpg --quiet --no-use-agent --yes --batch --passphrase=${PRIVATEPASSWORD} -o ${PRIVATEFILE}


Notes:

  1. This is obviously best for sharing dynamic secrets that aren't known ahead of time when creating the CloudFormation script (like ssh keys).  Static secrets could have been simply hard-coded into the CloudFormation script directly.
  2. You'll want to protect your CloudFormation script since it will have the gpg password hard-coded.
  3. The space in front of the PRIVATEPASSWORD environment variable is to avoid saving it in the bash history.  Feel free to avoid the environment variable altogether and just insert the password into the commands where referenced.
  4. The PUBLICCURLIOTOKEN is randomly generated when you visit https://curl.io/ (right after "send/" in the example code snippet on the homepage).  Feel free to use the one in my example above -- I don't think it ever expires.
  5. For PUBLICTOKEN I recommend using the GUID from http://www.guidgen.com/
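If you want to sanity-check just the gpg round-trip locally (skipping the curl.io/scry.in hops), something like this works.  Note I'm adding --pinentry-mode loopback, which gpg 2.1+ requires for batch passphrases (the original snippets' --no-use-agent is a gpg 1.x flag); the file path and passphrase are throwaway examples:

```shell
# Encrypt and decrypt a throwaway secret with a symmetric passphrase,
# mirroring the gpg halves of the two snippets above.
echo 'my dynamic secret' > /tmp/secret.txt
gpg --cipher-algo AES256 --symmetric --yes --batch \
    --pinentry-mode loopback --passphrase=testpass -c /tmp/secret.txt
rm /tmp/secret.txt                       # simulate the receiving server
gpg --quiet --yes --batch --pinentry-mode loopback \
    --passphrase=testpass -o /tmp/secret.txt -d /tmp/secret.txt.gpg
cat /tmp/secret.txt   # -> my dynamic secret
```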

Crossfire

0 comments

crossfire game

In honor of the classic Milton Bradley game that my wife got me for Christmas (after spending a ridiculous amount on eBay), I thought I'd re-post this hilarious article from cracked.com that is no longer available on their site (slightly edited):





Crossfire 


This board game was created by Milton Bradley in 1971, though it was the indoctrination campaign of the 1990's which would ultimately etch this name into our minds for eternity. Playing requires only eyes, hands, and a will of steel.


Confused?  Let's ensure you're in the right place before moving on:


Just The Facts
  1. Crossfire has a 50% mortality rate
  2. Marbles + Crack x American Gladiators = Crossfire
  3. Many consider it a part of their childhood (even if they've never actually played it)

It's some time in the future...
You run through your mental checklist as you fly high above the elongated octagon of the Crossfire Arena on your triangle roller-puck.
Leather jacket? Check!
Fingerless gloves? Check!
Totally radical attitude? Raditude, check!
The chanting crowd of fist-pumping lost souls is drowned out only by the jarring barks of thunder. Lightning dances dramatically down the blackened skies as if to meet the rising flames half way, casting an eerie purple glow on the scene. The Arena shrinks down to combat size, and you are face to face with your opponent. "Poor wimp" you think, grasping the turret-mounted gun in front of you. It feels like shaking hands with an old friend.
"CROSSFIYAH!" is declared by the Overlord, signaling the beginning of the match. You load and immediately begin sending hot, chrome-laden doom toward your opponent. He responds with a torrent of well-aimed silver retribution. The Arena is a blur of purple and silver; the organized chaos captivates the cheering hordes.
Tension grows as both pucks spin closer to their respective goals. Your blistered hands are on fire - you must ignore this for now. Loading. Shooting. Loading. Shooting. Eyes ablaze; no time to wipe away the dripping sweat. Tunnel vision sets in as adrenaline courses through your protruding veins. You know nothing--but to continue. Must continue!

Mom hates it when we Crossfire in the living room
CLUNK!
Dazed, you hardly even notice as the puck sinks. A deafening "CROSSFIYAHHHH!!!!" signals the end of the match. You watch as your neighborhood friend, turned mortal enemy, is banished to a fiery, spiraling oblivion. "Yeah! Yeah!" you declare, boastfully thrusting your fist to the heavens as the exaltation of victory washes over you.
Welcome to the world of CROSSFIRE.

Cracked on Crossfire
Powerless in the face of pure marketing genius
If you lived through the '90s, chances are you've encountered Crossfire's iconic commercial, which saturated Nickelodeon's airwaves at the time. Representing one of the world's first successful ventures into "X-treme" style advertising, its promise of danger and glory quickly caught the attention of pre-pubescent boys everywhere. If you are anything like us, you wanted this "rapid fire shoot-out game" more than a Super Soaker 100, but didn't get it until your sadistic parents gave it to you as a sick joke when you turned seventeen.
"Remember that thing you wanted more than anything, son? Surprise! We bought it for you eight years ago, and thought it would be a riot to wait until you didn't want it anymore to give it to you!"
Irony makes a terrible birthday present.

Gameplay:

The game's brilliance lies in the combination of ball grabbing and shooting; two of the most testosterone-fueled gestures known to man. Each player starts off with twenty marbles, and tries to shoot the polygonal pucks into their opponent's goal. It's a race to three points, and aggression is chiefly rewarded. The action is non-stop and fast-paced and other hyphenated words, consisting of constant reloading and shooting until a winner emerges. Gun jams are a common occurrence, but much like disobedient women and children, can be corrected easily with a little well placed smackage.
Tearing families apart since '71

The Legend:

Believe it or not, Crossfire wasn't always all lightning and flaming death. Far removed from the "I sent all my friends spinning into the ether, so now I wander this earth alone" style for which it is known, family togetherness was once the main theme.
Suburban, wholesome, family, togethernezzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
Not. One. Explosion.
In fact, it wasn't until 1992, twenty one years after its original release, that Milton Bradley began to change the image of Crossfire. The first attempt was quite impotent, with lame effects and a cheesy song. In truly artful protest, the lead singer pronounces "Crossfire" as "Fart fire" throughout the tune, displaying the self-aware contempt only a man stuck in a terrible '90s commercial could ever know.




In comparison, its better known counterpart has been called a thirty-second movie. The opening line "It's sometime in the future..." raises far more questions than it answers, and just seems unnecessary; but for kids, its ambiguity hinted at a deeper plot, and the open-ended storyline fired up their imaginations.
To brilliant effect, the ad was used merely as the primer for an epic to be continued in the young viewer's mind, lasting long after the ad itself ended. The unprecedented potency of the commercial is why Crossfire was so desired by kids at the time, and why it triggers such nostalgia in people today.
The same tactic was used in this classic:
It's like the more sophisticated big brother of the Crossfire ad in that it goes so far as to incorporate rudimentary metaphor (our "fire-demon" is "drinking alone", and it has yet to be slain), but there's something about the uncanny parallels that are almost.....suspicious.
[Updated on: Today at 6:07am]
Jesse Ventura has informed Cracked that Crossfire was a training & indoctrination device, set up by the Marine Corps to create a generation of American "super soldiers" by developing hand-eye coordination, tactical decision-making skills, and raw killer instinct early on in future prospects.
A random Ron Paul supporter we met online confirms this.
Good enough for us
Out of respect for the legions of people that grew up in the 1990s, still longing for that Crossfire glory which will never come, we'll fight the urge to be hilarious here and end on a serious note...



Where have all the space simulators gone?

0 comments


2011 was a bad year for astronomy education.  It marked the end of development for Celestia, the first astronomy simulator that I ever fell in love with.  Spin-offs like Celestia.sci and Celestia161-ED slowly died on the vine after that.

A year before, Orbiter had gone dark.  Two years later, Digital Universe Atlas (demonstrated in the video above) development ended in favor of a commercial planetarium product.

Sky in Google Earth and Sky-Map only allow 2-D navigation and panning (flat-earth believers rejoice!).

Kerbal Space Program has mediocre graphics.  Likewise with Pioneer.

SpaceEngine looks promising but it's proprietary (owned by a single Russian developer) with an uncertain free beta status and very commercial aims [1] [2] [3].  Outerra, Starry Night, and Redshift are similarly commercial.

In addition to being commercial, Universe Sandbox by design is more about astrophysics than depicting the actual universe.

So, where have all the cool space simulators gone?  Where are all the brilliant retired astronomers, physicists, and OpenGL space animators congregating?  Is open source too unrealistic for this product category?  Please comment and vote for your favorite!

Update: WorldWide Telescope looks promising (especially since Microsoft recently open sourced it and switched ownership to a neutral governing body) but it uses image stitching (similar to KStars) instead of vector 3D space so you don't get the star-flyby effect, which I feel degrades the "space travel" experience.

Robot can solve a Rubik's cube in less than 1.1 seconds

0 comments

2016 predictions

0 comments


I normally don't share lengthy videos, but I stumbled across this one by Joe Colantonio and I was surprised how broadly applicable it was considering the topic and title seemed very specific.  I recommend watching the whole thing even if you're not specifically interested in test automation:

Sync Minecraft worlds to multiple Windows machines/accounts using Dropbox

1 comment


My kids like playing Minecraft but we have multiple computers and laptops, and they were asking how to get all their worlds regardless of the machine they're on.  We have a Dropbox subscription, but unfortunately the official documentation for syncing folders outside of the single Dropbox sync folder involves creating a Windows shortcut, which won't work with Minecraft's Java logic (the shortcut appends a hidden *.lnk extension, so %appdata%\.minecraft\saves becomes %appdata%\.minecraft\saves.lnk; Minecraft won't see it and will create a new, empty saves folder).

Fortunately, there's a fairly easy solution (which will work for syncing any number of extra folders to Dropbox):

1.  If you're on Windows 8 or older, install the Microsoft Visual C++ 2005 Redistributable Package (note: 64-bit machines should install BOTH the 32-bit AND 64-bit versions)

2.  Download and install the free Link Shell Extension program

3.  Click Start and in the run box type %appdata%/.minecraft and hit Enter

4.  Click on your Dropbox Desktop icon and click the Dropbox Folder link


5.  Create a folder in the Dropbox folder called minecraft-worlds

6.  Move the saves folder from the .minecraft directory to the new Dropbox minecraft-worlds folder

7.  Right-click the Dropbox saves folder and choose Pick Link Source


8.  In the .minecraft directory, right-click in any blank white area and choose Drop As... > Symbolic Link (accept the Security prompt if it appears)


9.  Run the Minecraft game and make sure your worlds are still available and work as expected.  Then close Minecraft.

10.  Go to another computer and repeat Steps 1-4 (or another account on the same computer do Steps 3-4).

11.  Move the contents of the .minecraft saves folder into the existing Dropbox minecraft-worlds/saves folder then delete the empty .minecraft saves folder

12.  Repeat Steps 7-9

13.  If you have any other computers or accounts, continue repeating Steps 10-12
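For the curious, here's the same move-then-symlink trick on Linux/macOS using plain ln -s (what Link Shell Extension's "Drop As... > Symbolic Link" does on Windows).  The /tmp paths are throwaway examples, not real Dropbox/Minecraft locations:

```shell
# Move the saves folder into a "synced" location, then leave a symlink
# behind where the game expects it -- the game follows the link transparently.
mkdir -p /tmp/demo/dropbox/minecraft-worlds /tmp/demo/.minecraft/saves
echo 'world data' > /tmp/demo/.minecraft/saves/MyWorld.txt
mv /tmp/demo/.minecraft/saves /tmp/demo/dropbox/minecraft-worlds/
ln -s /tmp/demo/dropbox/minecraft-worlds/saves /tmp/demo/.minecraft/saves
cat /tmp/demo/.minecraft/saves/MyWorld.txt   # -> world data
```

Unlike a Windows shortcut, a true symbolic link has no .lnk extension, which is exactly why Minecraft is happy with it.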

Enjoy!

Drupal Storage API and AWS S3 tutorial

11 comments


By default, Drupal supports a local public and private file system for storing user uploaded files (images, pdfs, etc.).  While it works well for most use cases, there are disadvantages.  For example, it's nearly impossible to switch from public to private or vice versa.  Once you make your choice, you're stuck with it.  Also, it's somewhat limiting in the modern cloud era with cheap 3rd party cloud storage.  If you have disk space constraints and are considering moving your files to the cloud, the two most popular active Drupal options are S3 File System (s3fs) and Storage API (storage).

s3fs module

    Pros:

  • Active development (three main developers)
  • Uses the official AWS SDK (version 2 only, though)
  • Easy to set up and use
  • Provides a migrate mechanism
    Cons:

storage module

    Pros:
    Cons:

    After weighing the pros and cons, I eventually decided to go with Storage API.  Here's how to migrate an existing site file system to AWS S3 using that module:

    1.  Download the necessary modules: drush dl imageinfo_cache storage_api-7.x-1.x-dev storage_api_stream_wrapper-7.x-1.x-dev storage_api_populate-7.x-1.x-dev

    3.  Optionally apply this fix to suppress a false positive nag error

    4.  Enable the modules: drush en storage storage_stream_wrapper storage_api_populate imageinfo_cache

    5.  Go to /admin/config/media/file-system and change the default download method to Storage API (public or private depending on your site needs)



    6.  Now, for the somewhat labor-intensive step: update all your content type fields that rely on the file system to use Storage API.  For example, edit the Article Drupal content type (/admin/structure/types/manage/article/fields) and edit the Image field and change the upload destination to Storage API (public or private depending on your site needs)

    7.  Once all your content types are updated to use Storage API, you're ready to have your existing files managed by Storage API.  Go to /admin/structure/storage/populate and check Migrate all local files and Confirm and then click Start

    8.  After the process update completes, you can disable the populate module: drush dis storage_api_populate

    9.  Now that all your static files are managed by Storage API, you need to migrate your dynamic image styles: drush image-generate  Choose all for any prompts:


    10.  Once the image styles have been generated (which may take a while to complete), you're ready to verify the migration.  Move everything in the site's files directory except the storage folder and the .htaccess file to a temporary backup location and then run drush cc all && drush cron

    11.  Now, verify the site functions normally.

    Congratulations, you've updated your site to use Storage API!

    ...But you're probably thinking, "Okay, so what's the big deal?  The site looks the same and it just seems like all the files moved into a new folder called storage.  So what?!"

    Well, get ready to experience the awesome power of Storage API by migrating your file system to AWS S3!  (or you could just as easily move them to Rackspace, etc. using the same process...)

    1.  First, you'll need an AWS account with IAM permissions to create S3 buckets and use CloudFront:

    {
        "Statement": [{
            "Sid": "ModifyAssets",
            "Action": [
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:GetObjectVersionAcl",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:PutObjectVersionAcl"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::yourbucketname/*"
            ]
        }, {
            "Sid": "BucketRights",
            "Action": [
                "s3:ListBucket",
                "s3:ListAllMyBuckets"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }]
    }


    {
        "Sid": "Stmt1450391402000",
        "Effect": "Allow",
        "Action": [
            "cloudfront:CreateDistribution",
            "cloudfront:CreateInvalidation",
            "cloudfront:DeleteDistribution",
            "cloudfront:GetDistribution",
            "cloudfront:ListDistributions",
            "cloudfront:UpdateDistribution",
            "cloudfront:ListInvalidations",
            "cloudfront:ListStreamingDistributions"
        ],
        "Resource": [
            "*"
        ]
    }

    2.  Once the account is created with the necessary IAM permissions, you'll need to create an access key:



    3.  Once you have your access key ID and Secret, go to your Drupal site and browse to /admin/structure/storage/create-container

    4.  Choose Amazon S3 from the service dropdown and click Next

    5.  Provide your access key ID, Secret, and a globally unique bucket name (I recommend a name that does NOT include a dot [.] since that's interpreted as a subdomain).  In addition, select the AWS region you want to create the bucket in.  Finally, make sure to check the Serve with CloudFront checkbox (note: streaming with CloudFront is out of scope for this tutorial).  You can optionally select the Reduced redundancy checkbox for cheaper 99.99% durability.  Then click Create.


    Note: it may take up to 20 minutes for the CloudFront processing to complete on the AWS backend but you can continue the setup process below immediately:

    6.  Go to /admin/structure/storage/create-class and give it a descriptive name like "Cloud" (keep Initial container Filesystem for performance reasons) and then click Create class


    Note: like others, I have no idea what the other checkboxes do so leave them unchecked.

    7.  On the subsequent screen, choose Amazon S3 (the container you created in the step above) from the dropdown and then click Add


    8.  Now, go to /admin/structure/storage/stream-wrappers and click edit for Public, Private, or both (depending on your use case) and change the Storage class to Cloud




    9.  Finally, run drush cron to actually push your local files to the AWS S3 bucket.  This may take a while so I strongly recommend using drush instead of the Drupal web interface to run cron.

    10.  Verify the site functions as expected.  The images should now be served from amazonaws.com or cloudfront.net

    11.  Celebrate faster page load times and more file system redundancy!  Also, now that your files are in S3, you can even set up a backup strategy for Infrequent Access or Glacier.

    Lubuntu 15.10 post-install configuration

    0 comments


    After Lubuntu 15.10 was successfully installed, there were still some tweaks I needed to make to be fully satisfied.  Feel free to use or ignore as you see fit:

    Run system update via Start > System Tools > Software Updater

    Set up video card

    Add/Remove applications:
    sudo apt-get install chromium-browser default-jdk libreoffice thunderbird keepassx audacity git xscreensaver xscreensaver-gl-extra p7zip-full shutter avidemux
    sudo apt-get remove gnumeric abiword firefox pidgin sylpheed xpad
    sudo apt-get install libdvd-pkg
    sudo dpkg-reconfigure libdvd-pkg

    Right-click the time in the bottom-right corner > select Digital Clock Settings > change the clock format from %R to %l:%M %p
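    You can preview the new format string with date(1) before committing to it (LC_ALL=C just forces English AM/PM markers):

```shell
# %R is 24-hour HH:MM; %l:%M %p is space-padded 12-hour time with AM/PM.
LC_ALL=C date '+%l:%M %p'
```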

    Change the screen saver: Start > Preferences > Screensaver

    Change the launch bar:
    Right-click the File Manager icon in the bottom-left corner > select Application Launch Bar settings > add Thunderbird, Leafpad, KeepassX, and LXTerminal

    Change the desktop wallpaper:
    Search for a desired wallpaper and save it to your hard drive.  Right-click desktop and select Desktop Preferences > Appearance tab > Click button next to current Wallpaper image and browse for your new wallpaper.

    Change the default view:
    Open file manager > Edit > Preferences > General tab > set Default View to Detailed List View

    Set Thunderbird to reply to messages above quoted text:
    Open Thunderbird > Edit > Preferences > Advanced tab > General tab > Config Editor... button > search for default.reply_on_top and set the value to 1
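    If you'd rather not click through the Config Editor, the same identity default can be set in a user.js file in your Thunderbird profile directory (the profile path varies per install; 1 means reply above the quoted text — verify the pref name in the Config Editor first):

```javascript
// user.js in your Thunderbird profile directory
user_pref("mail.identity.default.reply_on_top", 1);
```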

    Add printer:
    For my HP Officejet Pro 8100, I needed the HP drivers:
    sudo apt-get install hplip printer-driver-hpijs hplip-gui && sudo hp-setup
    Start > System Tools > Printers > right-click your printer > Set as Default

    Install Chromium plugins:
    https://www.eff.org/https-everywhere
    https://chrome.google.com/webstore/detail/adblock-plus/cfhdojbkjhnklbpkdaibdccddilifddb
    https://chrome.google.com/webstore/detail/wappalyzer/gppongmhjkpfnbhagpmjfkannfbllamg
    https://chrome.google.com/webstore/detail/clear-cache/cppjkneekbjaeellbfkmgnhonkkjfpdn
    https://chrome.google.com/webstore/detail/ghostery/mlomiejdfkolichcflejclcbmpeaniij
    https://chrome.google.com/webstore/detail/recent-bookmarks/olndffocioplakeilhkgenfgdincjlpn

    Edit keyboard shortcuts:  sudo leafpad ~/.config/openbox/lubuntu-rc.xml 

    After the <!-- Keybindings for desktop switching --> line, add:

    <keybind key="C-l">
      <action name="Execute">
        <command>xscreensaver-command -lock</command>
      </action>
    </keybind>

    then, replace:

       <!-- Take a screenshot of the current window with scrot when Alt+Print are pressed -->
        <keybind key="A-Print">
          <action name="Execute">
            <command>scrot -u -b</command>
          </action>
        </keybind>  

    with:

      <!-- Take a screenshot of the current window with shutter when Alt+Print are pressed -->
        <keybind key="A-Print">
          <action name="Execute">
            <command>shutter -a -e</command>
          </action>
        </keybind>

    then, replace:

      <!-- Launch scrot when Print is pressed -->
        <keybind key="Print">
          <action name="Execute">
            <command>scrot</command>
          </action>
        </keybind>  

    with:

      <!-- Launch shutter when Print is pressed -->
        <keybind key="Print">
          <action name="Execute">
            <command>shutter -f -e</command>
          </action>
        </keybind>
       

    ...then save and close the XML file
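    Before restarting Openbox, it's worth sanity-checking that every keybind you added is properly closed.  A minimal sketch that runs the check against a sample snippet (point file at ~/.config/openbox/lubuntu-rc.xml to check the real thing):

```shell
# Write a sample keybind snippet, then confirm open/close tags balance.
file=$(mktemp)
cat > "$file" <<'EOF'
<keybind key="C-l">
  <action name="Execute">
    <command>xscreensaver-command -lock</command>
  </action>
</keybind>
EOF
opens=$(grep -c '<keybind' "$file")    # '<keybind' does not match '</keybind>'
closes=$(grep -c '</keybind>' "$file")
[ "$opens" -eq "$closes" ] && echo "keybind tags balanced"
rm -f "$file"
```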

    Configure shutter:
    create a folder called screenshots in your home directory
    open Start > Accessories > Shutter
    from the menu, select Edit > Preferences
    Main side-tab:
    Filename: %Y%m%d-%NN
    Directory: screenshots
    Actions side-tab:
    Open with: place checkmark next to Built-in Editor
    Behavior side-tab:
    uncheck Display pop-up notification after taking a screenshot
    click the Close button and close the main Shutter window to save the changes

    Log out and log back in to test your changes (Ctrl+L will lock your screen, Print Screen will bring up the entire screen in Shutter, and Alt+Print Screen will bring up the active window in Shutter)