Saw a great GitHub Gist shared on Twitter by @pof that uses crontab and curl to check the Google Play page for the Nexus 5. I added a few tips on getting everything set up for those who might not have postfix running or haven't used crontab recently.

Everything is also on my Gist, which I forked from poliva and added the same tips in a comment:

Setting up postfix to work with Google Apps:

bash script forked from poliva:

#!/bin/bash
# Set URL to the Google Play page to monitor before use
URL=""
EMAIL="[email protected]"

mkdir -p /tmp/googleplay/
rm /tmp/googleplay/after 2>/dev/null
mv /tmp/googleplay/now /tmp/googleplay/after
curl "${URL}" -o /tmp/googleplay/now
len=$(diff /tmp/googleplay/now /tmp/googleplay/after | wc -l)
if [ "$len" != 0 ]; then
        echo "${URL}" > /tmp/content.txt
        cat /tmp/googleplay/now > /tmp/che.html
        /usr/bin/mutt -x -s "Nexus5 available on GooglePlay" -a \
                /tmp/che.html -- "${EMAIL}" < /tmp/content.txt
fi

Make the bash script executable so cron can run it: chmod +x

Good guide on crontab:
Use the following line for crontab to run the script every 5 minutes:

*/5 * * * * /home/mydirectory/
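To wire the steps above together non-interactively, here is a small sketch; the script name checkplay.sh is an assumption, since the original path is truncated above:

```shell
# Assumed script location (the original post's path is truncated).
SCRIPT=/home/mydirectory/checkplay.sh
CRON_LINE="*/5 * * * * $SCRIPT"
echo "$CRON_LINE"
# To install without opening an editor:
#   ( crontab -l 2>/dev/null; echo "$CRON_LINE" ) | crontab -
# Verify the entry afterwards with: crontab -l
```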

DataTables is a jQuery JavaScript library plug-in that converts static HTML tables (including dynamically generated ones) into dynamically viewable and sortable tables. The advantage of using this plug-in is that the end user gains control over viewing and sorting the information presented in a table.

There is one feature I wanted to implement but had a difficult time finding the exact solution for. I wanted to set the default sorting for one specific column. The function, aaSorting, handles this but requires an integer indicating the location of the column header. The downside to using fixed integers is that if the column headers change at some future point, the sorting could break, requiring the fixed integer to be updated. Why can’t this be dynamic and rely on a class name for the column header instead of a fixed integer?

The author of DataTables addresses this question and feature in a post that also includes a work-around for setting the default sort by using the table column header (th) class value. The author also posts a live example.

In the end, the workaround code is:

    "aaSorting": [[ $('#example thead th.default_sort').index('#example thead th'), 'asc' ]]

The charts below are for the top 15 U.S. business school MBA programs based on an average of rankings from four sources: US News 2012, BusinessWeek 2012, Financial Times Global 2013, and the Economist 2012.
For a more consolidated version, view the top 6 U.S. business school MBA programs.

Bar Chart showing the average ranking from four sources: US News 2012, BusinessWeek 2012, Financial Times Global 2013, and the Economist 2012 (lower is better)

Bar Chart showing the total expense for 2 years of tuition (in USD) using 2012-2013 data

Bar Chart showing the average 1st year salary (in USD) without bonus using 2012-2013 data

Bar Chart showing the revenue/expenses ratio. Revenue is defined as the 1st year salary without bonus; expenses are defined as two years of tuition costs. The break-even point is 1: values greater than 1 break even after the first year of revenue. (Larger value is better)
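As a worked example of how that ratio is computed (the figures below are hypothetical, not taken from the charts):

```shell
# Hypothetical numbers, not taken from the charts above.
salary=120000            # average 1st-year salary without bonus (USD)
tuition_per_year=56000   # tuition per year (USD)
expenses=$((tuition_per_year * 2))
ratio_pct=$((100 * salary / expenses))   # ratio expressed as a percentage
echo "revenue/expenses = ${ratio_pct}% of break-even"   # over 100% breaks even in year one
```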

The charts below are for the top 6 U.S. business school MBA programs based on an average of rankings from four sources: US News 2012, BusinessWeek 2012, Financial Times Global 2013, and the Economist 2012.
Top 15 U.S. business school data

Bar Chart showing the average ranking from four sources: US News 2012, BusinessWeek 2012, Financial Times Global 2013, and the Economist 2012 (lower is better)

Bar Chart showing the total expense for 2 years of tuition (in USD) using 2012-2013 data

Bar Chart showing the average 1st year salary (in USD) using 2012-2013 data

Bar Chart showing the revenue/expenses ratio. Revenue is defined as the 1st year salary without bonus; expenses are defined as two years of tuition costs. The break-even point is 1: values greater than 1 break even after the first year of revenue. (Larger value is better)


At some point in the last few months, changes were made to the ADT plugin for Eclipse, to Google Play, or to both, that rendered my most recent Android application updates “incompatible” with tablets.
After a bit of research, I located three adjustments that resolved the new “compatibility” issue for my applications distributed through Google Play on tablets.

1) I added the below lines to the AndroidManifest.xml file. The lines explicitly declare support for all screen sizes, especially xlargeScreens for tablets.

<supports-screens android:smallScreens="true"
                  android:normalScreens="true"
                  android:largeScreens="true"
                  android:xlargeScreens="true"/>
2) Adjusted the uses-sdk section of the AndroidManifest.xml file from:

<uses-sdk android:minSdkVersion="4" android:targetSdkVersion="8"/>

to:

<uses-sdk android:minSdkVersion="4"/>

3) Ensured there was a drawable-xhdpi folder under the res directory containing at least the Android application icon.

After making these three changes, Google Play restored the “compatible” status for my applications on tablets.

The most frustrating part of the experience is that Google Play gives no indication during the application upload and publishing process that it will prevent tablets from using the application. Due to this lack of transparency, the issue can’t be identified until the application update has already been published and made public to users on tablets.

Update on June 3rd 2013: For the next update to the applications, I might try adding targetSdkVersion back in to see whether it impacts tablet support in Google Play. As mentioned in the comment below, and I agree, removing or adding targetSdkVersion shouldn’t impact tablet support.


After previously posting a summary of my research into the best options for a well-priced, high-performance, and secure 256GB SSD, I attempted to gather as much detail as possible about the encryption provided on the OCZ Vector 256GB drive.

The official documentation for the OCZ Vector 256GB drive used to state, “Data Encryption: 256-bit AES-compliant, ATA Security Mode Features”. This information was removed within the last week, and I inquired about it below. In addition, the previous official documentation was vague and didn’t provide much technical detail. With the help of Dr Charl Botha and his blog post, SSDs with usable built-in hardware-based full disk encryption, I was able to hold a very technical conversation with an OCZ Technology support representative, Eric Von Stwolinski, regarding the AES encryption implementation on the OCZ Vector 256GB drive. The full conversation is below.

In the end I’ve found the lack of technical details and current conflicting information to be confusing. The overall experience has been slightly frustrating as no definite conclusion can be drawn.
If you have any feedback or ideas, feel free to post them in the comments.

Apr 23rd, my original question:
“1. Does the drive encrypt its AES keys with the ATA password?
2. Is the ATA password stored as a non-reversible hash on the firmware?”

Apr 23rd, Eric Von Stwolinski:
“The drive does support 256-bit AES. It is enabled by setting an ATA level password.
Once a password is set the drive is completely inaccessible until the password is provided. There is no master password for the drive or any way to access the drive other than to supply the correct password once it is enabled.”

Apr 24th, my reply:
“Is the AES key, that is used to encrypt the data on the drive, encrypted using the ATA password?”

Apr 24th, Eric Von Stwolinski:
“It uses AES encryption, but this feature is enabled and used by setting the ATA password on the drive.

If no ATA password is on the drive then the AES encryption is inactive. Only when an ATA password is applied to the drive is the AES encryption used.”

Apr 24th, my reply:
“Unfortunately, your last response doesn’t directly answer my question. I’ll repeat and rephrase my question. Thanks for your assistance in clarifying this important point for me.
Repeat: ‘Is the AES key, that is used to encrypt the data on the drive, encrypted using the ATA password?’
Rephrase: I understand that the AES encryption is only activated once the ATA password has been applied. My question is about how the ATA password is applied in relation to specifically the AES encryption key. AES encryption requires a key to encrypt and decrypt the data. The handling of this AES key is the focus of my question. Is the AES key itself encrypted using the ATA password?”

Apr 25th, Eric Von Stwolinski:
“The ATA password is the AES key.
The key for AES is enabled, disabled, or set using the ATA level password function. If an ATA password is set then AES is enabled, and the key to unlock the drive is the ATA password.
This means the ATA password must be provided every time you want to access the drive or if you want to change/disable the password.
Any attempt to access the drive without providing the ATA password would require getting through AES 256 bit, which isn’t possible to do with currently existing computers.”

Apr 29th, Eric Von Stwolinski: “The notes about AES support are just on the product page for the Vector drive:”

Apr 29th, my reply: “Hi Eric,
I have two follow-up questions. I do appreciate your assistance in sorting out the AES encryption on the OCZ Vector SSD!
1) The product detail page you linked is very vague only saying, “256-bit AES-compliant, ATA Security Mode Features”.
Is there a more detailed public resource that provides the same level of detail you’ve provided regarding the AES encryption key and relation with the ATA password?
2) Regarding your comment two responses ago, “The ATA password is the AES key.” If this is true, then changing the ATA password will change the AES key, since they are the same. The current data on the OCZ Vector SSD, which was encrypted with the prior key, can’t be decrypted with the new/changed key, rendering the current data unreadable? To summarize, you’re saying that if the ATA password is changed, the current data on the OCZ Vector SSD is lost?”

Apr 29th, Eric Von Stwolinski: “We have no further documentation about the drive’s security features. This is only a consumer grade drive. Our enterprise grade drives have much more documentation available. If you are looking for a high security drive I strongly recommend looking into an enterprise grade drive.
If you wish to destroy all information on the drive forever, that can be done using the secure erase function in the toolbox utility. This is the only way to reset and wipe the drive. A secure erased drive is not recoverable by any means.”

Apr 29th, my reply: “Hi Eric,
Thanks for clarifying the documentation. I’m still not clear on my previous follow-up question. I’ll rephrase and attempt to clarify.
Is it true that changing the ATA password will render the data on the drive unreadable or inaccessible?
This is based on your comment that the “ATA password is the AES key”. If this is true, changing the ATA password would change the AES key. Without the previous AES key (previous ATA password) that the data was encrypted with, the drive can’t decrypt the stored data.
Can you confirm that changing the ATA password makes all data, prior to the ATA password change, on the drive unreadable or inaccessible?”

Apr 30th, Eric Von Stwolinski: “Changing or removing a password will not wipe out all information on the drive. That can only be done by a secure erase using the toolbox.
Forgetting a password will render the drive inaccessible and all data is lost, but merely changing or removing the password (which requires that the correct password is first supplied) will not destroy any information on the drive.”

May 2nd, my reply: “Hi Eric,
Thanks for all the clarification and assistance. I was reviewing all the information you’ve provided, and when I accessed the link you gave to the OCZ Vector Specifications page, I see the section that previously mentioned, “Data Encryption: 256-bit AES-compliant, ATA Security Mode Features” is no longer listed on the page. I can’t find any mention of AES-compliant or ATA Security Mode Features on the official page.
Can you confirm you aren’t able to view this on the official link you provided and help me understand why this was removed? Has official support for the 256-bit AES-compliant encryption and ATA security mode features been dropped?”

May 2nd, Eric Von Stwolinski: “I’m unsure why it was changed. It may have been changed due to firmware updates.
Please note that while the controller is capable of 256 AES, it is not intended to be a primary feature of the Vector drive.
Our enterprise grade drives are designed and built with a much wider range of features, including greatly increased write endurance as well as security and monitoring features.”

This is a high-level summary of the research I’ve done with the time I was able to dedicate. There is potential for much more in-depth research requiring a larger time commitment. Feel free to leave helpful comments!

There are three categories important to my research: Performance, Encryption and Price.


Performance
Tom’s Hardware provides a great, thorough, and detailed listing of various performance benchmarks. The top five 256GB SSD drives as listed by Tom’s Hardware are:
1) OCZ Vector
2) Plextor M5 Pro
3) Samsung 840 Pro
4) OCZ Vertex 4
5) Corsair Neutron GTX


Encryption
To help demystify the encryption features on these SSD drives, I found two extremely helpful and detailed blog posts: SSDs with usable built-in hardware-based full disk encryption and Locking and unlocking an HDD with Dell Bios ATA password with hdparm.
The results from the SSDs with usable built-in hardware-based full disk encryption blog leave only one set of SSD drives known to properly implement hardware-level encryption: the Intel series 320 and 520 drives. Unfortunately, the Intel SSD drives aren’t on Tom’s Hardware’s top 5 list, and none of Tom’s top 5 drives are mentioned positively in that blog post. The OCZ drives are listed with a note that they more than likely do not properly implement hardware-level disk encryption.
I reached out to OCZ support regarding the encryption provided on their OCZ Vector 256GB drive and with assistance from Dr Charl Botha, the author of the SSDs with usable encryption blog post, I was able to ask some great questions. As the full conversation turned out to be quite long, I’ve put it in a separate blog post titled: OCZ Vector 256GB SSD AES 256-bit Encryption Technical Details.
The end conclusion is that OCZ probably does properly implement the built-in hardware-based full disk encryption, but we don’t know for sure. The ideal confirmation would be provided by official documentation OCZ provides with the SSD drive.
If you want to reach out to OCZ, or any other manufacturer and confirm, feel free to leave an update in the comments!
The second helpful blog posting, Locking and unlocking an HDD with Dell Bios ATA password with hdparm, provides a great step-by-step walkthrough of the commands to execute using hdparm and the SSD to lock and unlock the drive.
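The hdparm walkthrough mentioned above boils down to a short command sequence; this is my own hypothetical sketch (the device /dev/sdX and the password "secret" are placeholders), stored and printed rather than executed because the real commands change the drive's security state:

```shell
# Placeholder device and password; these commands alter ATA security state,
# so they are printed here rather than run.
HDPARM_CMDS='sudo hdparm -I /dev/sdX                                         # inspect the Security section
sudo hdparm --user-master u --security-set-pass secret /dev/sdX   # set user password (locks drive)
sudo hdparm --user-master u --security-unlock secret /dev/sdX     # unlock after a power cycle'
printf '%s\n' "$HDPARM_CMDS"
```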


Price
There are two commonly trusted online retailers with a reputation for competitive pricing: Newegg and Amazon. To quickly check other competitors, a simple search through Google’s product search usually shows a good listing.
The OCZ Vector is offered for $265 at Amazon and $270 at Newegg, but with a recent 15% off promotion at Newegg, the price comes to $230.
The Samsung 840 Pro is offered for $220 at Newegg and $230 at Amazon.


Conclusion
The best 256GB SSD drive currently on the market that possibly offers proper hardware disk-level encryption at the most competitive price is either the OCZ Vector (with the promotion discount, $230 at Newegg) or the Samsung 840 Pro ($220 at Newegg).

After every kernel upgrade on Ubuntu, the prior kernel image and header files remain. Personally, the most common usage for these prior kernel files is to revert to a prior kernel version when I’ve experienced issues with the current one.

I found over 2 GB of older kernel images and header files taking up space that I didn’t plan on using. For those who want to clean out prior (old) versions and free up drive space, below are some great and simple steps. These steps come from a great blog post, and I gleaned a more detailed approach from a comment on that post.

To start, the following terminal command gathers a list of all the Linux kernel headers and images currently installed, excluding the running kernel:

dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' >/tmp/file

To review the list of files, view the outputted information in /tmp/file using a text editor, such as:

vi /tmp/file

To send this list directly to the package manager to have all items on the list removed, use the following terminal command:

cat /tmp/file | xargs sudo apt-get -y purge
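Before piping the list into apt-get purge, it can help to preview what will be selected. The following is my own rephrasing of the filtering idea in more readable steps, not the original one-liner:

```shell
# Preview: list installed linux-* packages, excluding the running kernel.
# This mirrors the dpkg/sed pipeline above in a more readable form.
current=$(uname -r | sed 's/\(.*\)-\([^0-9]\{1,\}\)/\1/')
echo "Keeping packages matching version: $current"
dpkg -l 'linux-*' 2>/dev/null | awk '/^ii/ {print $2}' \
  | grep -v "$current" | grep '[0-9]' || true
```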

I was able to free up a few GB of space from prior kernel images dating back to 2.6.*.

Note: Once the kernel image and header files are removed, the system cannot revert to that prior kernel. Only remove kernel images and header files for kernels that will not be required. Leaving 2-3 prior kernel versions in place could be considered a safe backup practice.

I accidentally wiped the data on my Android device losing all my Google 2-factor authentication tokens.

Using the Google Authenticator plugin for WordPress by Henrik Schack meant I was now unable to log in to my blog. To log in, I would need to either delete the plugin, removing the 2-factor authentication (rm -r wp-content/plugins/google-authenticator/), or recover the secret key and add it back to the Google Authenticator application on my Android device. Rather than erase a plugin I wanted to use, I started searching for the key. Unfortunately, I wasn’t able to easily find this information.

I manually searched through the WordPress database and found the secret key in the usermeta table, in the field called googleauthenticator_secret. Providing this key to the Google Authenticator application allowed it to start generating the login tokens again and allowed me to log back into my blog!
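For anyone else in this situation, the lookup can be expressed as a single SQL query; this is a sketch assuming the default wp_ table prefix (adjust for your installation):

```shell
# The wp_ prefix is the WordPress default; adjust if your install differs.
QUERY="SELECT user_id, meta_value FROM wp_usermeta WHERE meta_key = 'googleauthenticator_secret';"
echo "$QUERY"
# Run it with, for example:
#   mysql -u <db_user> -p <db_name> -e "$QUERY"
```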

Hope this helps somebody else in the same situation!


On Friday, November 29th, Dell made the Ultrabook XPS 13 Developer Edition available for order, initially at $1549, and later corrected the price to $1449, bringing it below the comparable Windows 8 version.
At first impression, this product appears amazing: the open source drivers, the software available on a GitHub page, the cloud-focused tools to quickly pull down the correct developer tools, the incorporated community feedback/support, and the focused hardware specifications.
Over the years, the main reasons I and others I know have purchased Dell products are twofold. One, Dell has built a solid reputation for producing quality products and providing decent warranties. Two, Dell has historically given the customer many options for customizing the hardware. This offering comes with reason number one but not reason number two. I will focus this post on reason number two.

Dell claims to offer a “Developer Edition” Ultrabook. There is a fine line between a marketing tag line and the real title/focus/goal of a product. If a product is marketed for developers, the developer should be able to customize the hardware. In the course of developing software, every developer runs into hardware limitations, whether through an accidental infinite loop or intentionally through stretching a system’s capacity. Developers write software confined inside the limitations of hardware. To produce a product aimed at developers without allowing hardware customization is essentially one way to tie the developer’s hands.

There are certain hardware configurations that can be customized without impacting the drivers or software provided on the Ultrabook. At minimum, both the amount of RAM and the SSD capacity can be adjusted without requiring different software drivers. In some cases, the display, such as offering a 1080p option in addition to the current 720p, can be adjusted without requiring different software drivers.


Dell should rely on the roots they built on, which notably included the ability for consumers to affordably customize product hardware. Dell should at least provide an alternative configuration with more RAM, a smaller SSD, or a higher-resolution display. I’m sure many more developers would follow through and purchase this product if Dell gave them the direct ability to select hardware components.


I’ve only come across two decent competitors in this niche market:
ZaReason UltraLap 430
Apple MacBook Air

Dell Ultrabook XPS 13 Developer Edition Hardware Specifications

As of Nov 29th 2012 – $1449
Initial Release Hardware Details:
3rd Generation Intel® Core™ i7-3517U (4M Cache, up to 3.0 GHz)
UBUNTU Linux 12.04
13.3″ HD 720p
8GB DDR3 SDRAM at 1600MHz
256GB Solid State Drive
Intel HD 4000
2.99 lbs


Dell Ultrabook XPS 13 Laptop, Developer Edition – Ubuntu 12.04 LTS
Comparable Ultrabook through ZaReason
Comparable MacBook Air
Comments on Barton George’s Blog
Sputnik Github Page
Ubuntu Image and PPA information
Dell Forum for Project Sputnik


Having spent a substantial amount of time (6+ years) running outdoors, there were always three main metrics that I wanted to know during and after a workout.
1) Overall time
2) Overall distance
3) Pace (for example, 1 mile in 7 minutes)
To store this information, I grew up using a journal or log, sometimes referred to as a runner’s log. A runner’s log would contain these metrics for each run, plus any additional information, such as the route taken, the weather, which pair of shoes, etc. Unfortunately, some of these important metrics were not precise and had to be estimated, such as distance, pace, and exact route.

Previous Solution
For the first metric, overall time, I would use a watch. Consequently, I always wore a watch in order to keep track of the time spent on any given run. In addition, I would purchase watches that stored run (lap) times to help keep track of the time spent on prior runs.
For the second metric, overall distance, the general approach was a rough guess. The third metric, pace, follows from the first two, time and distance. When running on a track or treadmill, distance and pace were easily available, but many people prefer to run outdoors. A few approaches I have used, or seen others use, for gauging distance include: running the same route multiple times, recording the times, and backing out the pace from experience (such as the distance covered per stride between sidewalk squares); using the odometer in a car while driving the route; and using online mapping web sites. These methods vary in accuracy, time, and effort required, but most are less than convenient.

New Solution
Now, using any smartphone and a free application from Nike called Nike+ Running, the three metrics I have always wanted are easily available, in real-time. During a run, the Nike+ application uses the smartphone’s built-in GPS radio and Google Maps to precisely monitor and report time, distance, and pace. Not only is the application able to report these three important metrics at the end of a run, but the application also reports these three metrics throughout the run providing instantaneous feedback on performance.
As if having access to these three important metrics during and after the run wasn’t enough, Nike also allows this information to be privately or publicly uploaded to their server for retrieval at any future point in time. In addition, the Nike+ application provides the majority of a runner’s log, including details of the route ran, weather conditions, route surface, and the specific pair of shoes used. As a side note on the shoes, the Nike+ application allows a pair of shoes to be added and tagged for each run, keeping track of shoe mileage for more precise shoe replacement timing. The application also provides summarized accomplishments and insights based on the performance of all saved runs such as, “This was your farthest run at 4.29 miles” or “This was your fastest 5K at 27:02”.

The Nike+ application also has an equally feature-filled and useful accompanying web site. On the web site, run details can be viewed on a larger screen allowing for a more detailed analysis of pace, distance and elevation. There is also an interesting feature showing predetermined Nike+ approved running routes in the user’s neighborhood allowing the user to indirectly compete against other runners.

I have come across two downsides. First, I found that the Nike+ web site used to allow users to challenge each other, but Nike states this feature has been disabled while they work to improve it.
Second, the Nike+ application appears to have a built-in music player but essentially little to no documentation or guidance on how to use the player. In the meantime, I’ve been using the offline mode in Google Music which has worked plenty well.

For a free application, Nike+ provides incredible features and services, both in the Android application and on the web site. For any runner, I highly recommend trying this application at least once, as the knowledge it provides is powerful and addictive!

Nike+ Running Android application
Nike+ Web site
Temporary Disabled Challenge feature on the Nike+ Web Site


While attempting to paste Java code from an Android application into a WordPress blog, I discovered that WordPress collapses all the white space and lets the text run over the margins. Adding the <pre></pre> HTML tags preserves the white space but does not respect the margins.

In the end, I added a few lines of code to the theme’s style.css file that force the <pre></pre> to respect whitespace and the theme margins. Below is an example of the adjusted pre tags and also contains the actual modifications to the style.css file.

Blog by Greg Rickaby – Adjust WordPress to support proper code display

/* Code
------------------------------------------------------------ */

pre {
	background-color: #dbdbdb;
	font-family: "Consolas", "Bitstream Vera Sans Mono", "Courier New", Courier, monospace !important;
	font-size: 13px !important;
	font-weight: normal !important;
	font-style: normal !important;
	text-align: left !important;
	line-height: 20px !important;
	border-left: 3px solid #75DB75;
	padding: 10px;
	margin: 0 0 15px 0;
	overflow: auto;
}

pre::selection {
	background-color: #3399ff;
}

code {
	background-color: #FFFF9E;
}

There is a great piece of Java code posted by koush in a GitHub Gist showing how to interface with the official Android Twitter application. The key was not only knowing the package name, but also knowing the right extra information to add to the intent sent to the Twitter application.
The code and two links below illustrate a simple method of firing off an intent loaded with the proper profile activity class name and screen name to pull up the user’s page in the official Twitter application. The second link shows how I added this to a custom What's New dialog used in my applications.


koush gist on github:
my gist on github with whats new dialog function:

Copy of my gist on github

    // Note: the R.style, R.layout, and R.id identifiers below are reconstructed
    // placeholders matching the variable names; substitute your own resources.
    private void whatsNewDialog() {
        //Dialog dialog = new Dialog(main.this);
        final Dialog dialog = new Dialog(this, R.style.WhatsNewDialog);
        dialog.setContentView(R.layout.whats_new_dialog);
        //dialog.setTitle(getString(R.string.app_name) + " v" + currentAppVersion);
        //set up Title
        TextView textWhatsNewTitle = (TextView) dialog.findViewById(R.id.textWhatsNewTitle);
        textWhatsNewTitle.setText(getString(R.string.mainTitle) + " v" + currentAppVersion);
        //set up text content
        TextView textWhatsNewContent = (TextView) dialog.findViewById(R.id.textWhatsNewContent);
        //set up image view
        ImageView img = (ImageView) dialog.findViewById(R.id.imgWhatsNew);
        //set up Okay button
        Button btnOkay = (Button) dialog.findViewById(R.id.btnOkay);
        btnOkay.setOnClickListener(new OnClickListener() {
            public void onClick(View v) {
                editor.putLong(PREF_WHATS_NEW_LAST_VERSION, currentAppVersionCode);
                editor.commit();
                dialog.dismiss();
            }
        });
        //check for the official Twitter application
        boolean twitterInstalled = false;
        try {
            PackageManager packman = getPackageManager();
            packman.getPackageInfo("com.twitter.android", 0);
            twitterInstalled = true;
        } catch (Exception ex) {
            twitterInstalled = false;
        }
        //set up Twitter button
        Button btnFollow = (Button) dialog.findViewById(R.id.btnFollow);
        btnFollow.setOnClickListener(new OnClickListener() {
            public void onClick(View v) {
                // this is the intent you actually want.
                // grabbed this by hooking a debugger up to twitter and debugging into android framework source.
                // this let me inspect the contents of the intent.
                Intent i = new Intent();
                i.setClassName("com.twitter.android", "com.twitter.android.ProfileActivity");
                i.putExtra("screen_name", "joeykrim");
                try {
                    startActivity(i);
                } catch (Exception ex) {
                    // uh something failed
                }
            }
        });
        //Log.d(LOG_TAG, "twitterInstalled: " + twitterInstalled);
        if (twitterInstalled) btnFollow.setVisibility(View.VISIBLE);
        //now that the dialog is set up, it's time to show it
        dialog.show();
    }

Recently, I had two Seagate 1TB hard drives fail: one was 4 years old, and the other was its 3-month-old warranty replacement. Luckily, I had the two drives set up in a RAID 1 configuration. Since I have an Intel chipset, I also have Intel’s support for software RAID. After configuring this in the BIOS, I set up the operating-system side of the RAID configuration in Ubuntu.

Second HDD Failure
After connecting the second refurbished HDD from Seagate, I entered the RAID controller setup while booting. On this screen, I saw that the new hard drive was automatically marked as ready to be used to rebuild the RAID configuration. After confirming, I booted into Ubuntu 11.10 and the software RAID automatically started rebuilding itself. There are two main commands to monitor and confirm completion of the rebuild. The first, sudo dmraid -s -v, gives a detailed analysis of the current RAID setup. The second, sudo dmsetup status, gives a detailed progress report on the rebuild.

The two 1TB hard drives usually take a few hours to completely synchronize, 2.5 hrs in my case. The process is very simple and easy.

Main Commands Used:
sudo dmsetup status (display progress)
sudo dmraid -s -v (display overall raid status, mirror and ok or nosync)

*Update: A comment left by Arie Skliarouk says that a rebuild command needs to be issued. Although I recall setting the rebuild status in the BIOS, this might be needed for some users. Example command: sudo dmraid -R isw_ebdfjfbgdj

Properly Working dmraid Status:

user@host:~$ sudo dmraid -s -v
*** Group superset isw_ebdfjfbgdj
--> Active Subset
name   : isw_ebdfjfbgdj_JRaid1TB
size   : 1953519872
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0

Needing to be Rebuilt dmraid Status:

user@host:~$ sudo dmraid -s -v
*** Group superset isw_ebdfjfbgdj
--> Active Subset
name   : isw_ebdfjfbgdj_JRaid1TB
size   : 1953519872
stride : 128
type   : mirror
status : nosync
subsets: 0
devs   : 2
spares : 0

Rebuild Progress Status (569/14905):

user@host:~$ sudo dmsetup status
isw_ebdfjfbgdj_JRaid1TB2: 0 102400000 linear 
isw_ebdfjfbgdj_JRaid1TB1: 0 1851117568 linear 
isw_ebdfjfbgdj_JRaid1TB: 0 1953519880 mirror 2 8:48 8:32 569/14905 1 AA 1 core
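The synced/total counter in that mirror status line (569/14905 above) can be pulled out with a small helper; this sketch is my own addition, not one of the original commands:

```shell
# Parse the rebuild fraction out of a dmsetup status line like the one above.
line='isw_ebdfjfbgdj_JRaid1TB: 0 1953519880 mirror 2 8:48 8:32 569/14905 1 AA 1 core'
frac=$(printf '%s' "$line" | grep -Eo '[0-9]+/[0-9]+')   # e.g. 569/14905
synced=${frac%/*}   # regions already mirrored
total=${frac#*/}    # total regions
echo "Rebuild progress: $synced of $total regions ($((100 * synced / total))%)"
```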

Helpful Links

Hard Drive Details:
Both 7200rpm 1TB Seagate drives
ST31000333AS (4 yrs old – Part of the Seagate Barracuda 7200.11 family)
ST31000524AS (2nd Refurbished Replacement)


Twitter Bootstrap Navbar Custom Coloring

The Twitter open source Bootstrap toolset is amazing. Customization tools and usage examples are all posted on github along with the source code.
For the specific task at hand, I wanted to change the default white/black color of the navbar. I spent more time than I expected searching the internet for easy-to-use, helpful examples.
In the end, I found two great tools that allow color changes to be made in real time and then packaged up and downloaded.
The steps fall into two main parts: picking colors and packaging.

Selecting Colors
The best tool I came across for picking a color and displaying its various shades (needed to properly shade the navbar) is the color picker at w3schools. After selecting a color, I experimented with various darker and lighter shades.
The best tool I came across for applying colors and seeing the results on the Twitter Bootstrap navbar in real time is bootstrap-generator, hosted on github and created by decioferreira. Being able to see the color changes in real time was extremely helpful and saved a lot of time.

Once the proper colors have been selected, Twitter provides a great customization tool that correctly packages up the changes. The customization tool is available on Github under Twitter’s bootstrap customization page.
After loading the color values into the navbar section of the customization page, at the bottom, press Customize and Download. I extracted the bootstrap.css and bootstrap.min.css from the .zip file and loaded these into my project.

Hope those tools and steps save some time and help in creating a custom color scheme for the Twitter bootstrap navbar!

While working on decioferreira’s real-time navbar tool, I found one small bug in the code that incorrectly linked navbarLinkColorHover to navbarLinkBackgroundHover. I fixed the bug and sent a pull request. If his page hasn’t been updated yet, I still have the forked version with the bug fixed and working on my github – bootstrap generator.

Twitter Bootstrap:
w3schools color picker:
decioferreira bootstrap-generator:
Twitter bootstrap customization page:


Shortly after posting Android applications to the Android Market (now the Play Store), I noticed that due to the diversity (fragmentation?) of hardware and software combinations, users were experiencing many errors that I wasn’t able to duplicate. I also quickly noticed that the logs submitted through the Google Play store to my developer account didn’t always contain helpful details.

Bugsense Experience Highlights
To easily and quickly remedy this, I started using Bugsense around November 2011. To add Bugsense to an application, the developer literally only has to add one .jar file and one line of code. The free account has been sufficient, as historically I’ve had under 500 bugs a month despite many application downloads. I have never had any issues with downtime. Emails are sent as soon as the application force closes; I’ve accidentally tested this during development one too many times. The ability to track bugs in one simple-to-use dashboard interface has greatly eased the overall application maintenance task.

Upgrading to a Paid Account
The reason I’m over the 500-bug-reports-a-month limit and need to purchase the $20-a-month plan is that somebody recently posted an older version of my paid Build Prop Editor application on a Russian forum for free. Now, the one bug in that app is reported 800-900 times a month, and since those users are running a “pirated” version, they aren’t receiving updates. This one bug alone puts me over the Bugsense quota for the free account. If I want to stay current on legitimate bugs during the second half of the month, I need to upgrade my account.

The alternatives are the open source library ACRA, which sends bugs to a Google document, or Crittercism, which limits usage based on the number of devices loading the app. The Google document is much more difficult to manage than the Bugsense dashboard. As for Crittercism, since I have popular apps, I am over its free limit even though the bug count would be low.

The Pros far outweigh the Cons. By using Bugsense, I’m able to spend my time developing and fixing bugs in my applications, rather than developing a proper system to collect and organize the bugs. Thanks Bugsense!

Below is my personal list of Bugsense Pros and Cons.
Pros:
**Reliable (I haven’t experienced downtime in the last year)
**Fast (Bug reports are emailed almost instantly)
**Quality customer service (I went over quota and emailed sales; the CEO answered and has been very friendly and helpful)
**Easy integration (Add one .jar and one line of code)
**Also supports other systems, such as iOS, HTML5, etc.

Cons:
**When the product first launched, the bug quota was unlimited, but the free account is now limited to 500 bugs per month (No such thing as a free lunch…)
**No plan pricing less than $20 a month
**No way to stop old bugs from consuming the free account monthly quota
**No way to stop bugs introduced by the Bugsense library itself from consuming the free account monthly quota

Bugsense – Pricing
Bugsense – Android Instructions

A Bugsense Android library developer reached out to me regarding this blog post and requested a list of the Bugsense library bugs being reported in my project. I sent over 6-7 of the most recent, and last week sent over another 4-5 new bugs. Hope they are getting patched!

Disclaimer: I’ve received 50% off for a one year subscription by writing this blog post. I was granted complete control over the content I’ve posted.

Additional potential alternatives: Airbrake, Hockey App, Apphance, TestFlight, Usermetrix, Crashlytics, and Crittercism


It took me a few minutes to gather the IRC server list for freenode, add the standard port, and resolve the hostnames to IPs.
I have provided all of this information below in the proper format for eggdrop bots. It can also be helpful when configuring znc, psybnc, muh, energymech, infobot, or any other IRC client or bot that relies on a saved server list.
Hope this helps somebody else save a few minutes!

List source: . The first section contains US servers, the second contains Europe servers, and the third contains the only Asia server.
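
For reference, eggdrop expects the servers inside a set servers { } Tcl block in eggdrop.conf. A minimal sketch of the format (the three round-robin hostnames below are illustrative, not the full resolved list from this post):

```
set servers {
  chat.freenode.net:6667
  chat.us.freenode.net:6667
  chat.eu.freenode.net:6667
}
```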


Recently I worked on implementing a ListView inside of a ViewPager. This was a bit of a challenge: fixing one little bug turned up another.

The first bug was that the background always turned black while scrolling, which, with black text on a white background, washed out the text.

After resolving this by setting the background color, the second, very persistent bug was constant garbage collection while scrolling through the ListView. Even with only 30-40 items, the garbage collection in logcat seemed awfully high.

After researching these issues, I came across the key Java calls to clear them up. There were many helpful posts, but I link the most helpful below.

The first Java call essentially sets the ListView background to white, matching the rest of the ViewPager. The second sets the cache color hint, which tells the ListView which color to use while scrolling and animating the fade in/out at the top/bottom of the ListView.

The commands are (the view id name here is illustrative):
listView = (ListView) this.findViewById(R.id.myListView);
listView.setBackgroundColor(Color.WHITE);
listView.setCacheColorHint(Color.WHITE);


Having an SSD (solid state disk) store the boot operating system has many advantages and a few disadvantages. One of the main disadvantages is that an SSD’s life span is more sensitive to heavy writes than that of a traditional hard disk drive. This makes moving the /home partition (user data) off the SSD to a traditional hard disk very advantageous. Also, having the traditional hard disk drives set up using RAID 1 (mirroring) helps protect against disk failure.

Having already created the software “FAKERAID” partition using dmraid, this guide covers how to map that partition over as the main /home directory. Following is the simple set of commands to run:
cd /
sudo -s -H
mv /home /home2
mkdir /home
sudo gedit /etc/fstab
(at the bottom add)
/dev/mapper/isw_cbgddidiaj_RAID2p2 /home ext4 defaults 0 0
mount -a
*there are multiple ways to move the data from the old home2 directory to the new home directory; note that cp -R /home2/* skips hidden dotfiles, while cp -a (archive mode) preserves ownership, permissions and hidden files*
cp -a /home2/. /home/
rm -r /home2
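
The fstab entry added above follows the standard six-field format. The sketch below just splits the fields out to make each one’s meaning explicit (the device-mapper name is from my dmraid setup and will differ on other machines):

```shell
# Split the fstab line into its six fields:
# device, mount point, filesystem type, mount options, dump flag, fsck pass.
entry="/dev/mapper/isw_cbgddidiaj_RAID2p2 /home ext4 defaults 0 0"
set -- $entry
echo "device=$1 mount=$2 type=$3 options=$4 dump=$5 pass=$6"
```

A pass value of 0 means fsck skips this volume at boot, which is a common choice for non-root data volumes.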

Most of these steps are taken and adapted from this guide: Ubuntu Wiki Raid


Running an SSD for the main boot partition is quite convenient for any OS, including Ubuntu. However, having Google Chrome, or any browser, store its cache on the SSD is not ideal.
Under Ubuntu Natty 11.04, moving Google Chrome’s cache to RAM is fairly simple and takes only a few commands. The advantages of storing the web browser cache in RAM are quicker reads/writes than on disk drives, no wear and tear on disk drives, and the cache being erased on reboot. The disadvantages are that the cache will be erased on reboot and will consume RAM, which can be limited on some systems.

1) Decide where to move the Chrome cache. I’ve picked the following location: /tmp/chrome.
This directory, /tmp/chrome, will need to be created and set up properly on each boot.
On Ubuntu 11.04, and probably older versions, this can be done simply in the /etc/rc.local file as follows:
sudo gedit /etc/rc.local
Add the following lines:
mkdir /tmp/chrome
mount -t tmpfs -o size=1024M,mode=0744 tmpfs /tmp/chrome/
chmod 777 /tmp/chrome/ -R

There are two ways to accomplish the next and last step. Option 1 is to create a symlink between the default Google Chrome cache directory and the new temporary cache directory in RAM. Option 2 is to add a switch to the Google Chrome command line telling each instance of the application to use the newly created cache directory in RAM.
Option 1:
rm -rf ~/.cache/google-chrome
ln -s /tmp/chrome/ ~/.cache/google-chrome
Option 2: Change the default here: sudo gedit /usr/local/share/applications/google-chrome.desktop
Replace the line:
#Exec=/opt/google/chrome/google-chrome %U
Exec=/opt/google/chrome/google-chrome --disk-cache-dir="/tmp/chrome/"
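
A slightly more defensive version of the rc.local lines from step 1 is sketched below; it only mounts if nothing is mounted there yet, so re-running it is harmless. The /tmp/chrome path and 1024M size are the values chosen above, and the mount itself still needs root:

```shell
#!/bin/sh
# Create the cache dir and mount a tmpfs there only if one isn't mounted yet.
CACHE=/tmp/chrome
SIZE=1024M
mkdir -p "$CACHE"
if mountpoint -q "$CACHE"; then
    echo "$CACHE already mounted"
else
    echo "mounting tmpfs ($SIZE) at $CACHE"
    # mode=0777 here replaces the separate chmod step above
    mount -t tmpfs -o "size=$SIZE,mode=0777" tmpfs "$CACHE" || echo "mount failed (are you root?)"
fi
```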

The only adjustment some might want to make is the size of the tmpfs partition created in RAM. I set the size to 1024MB so I never have to worry about adjusting it. For systems with a lot of RAM, that size should not be an issue.

Used the following main sources:
Firefox & Chrome Cache on RAM Drive -Fedora / Ubuntu
How To Change Google Chrome’s Cache Location And Size
