Sunday, December 13, 2015

Fixing iOS 9 (9.1, 9.2) Blank UIWebView Page in a Corona App

As expected, an update has come along that breaks something in your Corona app... this particular issue has to do with changes to UIWebView security in iOS 9. If you are not connecting your webview via (valid) https, you will now get a blank page in your webview. This can be resolved by customizing App Transport Security settings in info.plist (for native apps). This file won't exist in your Corona project, but it is supported via the build.settings file. There is a Corona forum post on how to update build.settings to generate the necessary info.plist settings in your build here. Even better, there is a Corona blog entry with more insight here. Finally, if you want to truly understand your settings, reference the Apple docs on this here.

In the end, your build.settings will need an exception added like so:

settings =
{
    iphone =
    {
        plist =
        {
            NSAppTransportSecurity =
            {
                NSExceptionDomains =
                {
                    ["example.com"] =
                    {
                        NSIncludesSubdomains = true,
                        NSThirdPartyExceptionAllowsInsecureHTTPLoads = true,
                    },
                },
            },
        },
    },
}


If you control the domain, you should use NSExceptionAllowsInsecureHTTPLoads rather than NSThirdPartyExceptionAllowsInsecureHTTPLoads. Only use NSIncludesSubdomains if you need to allow multiple subdomains.
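For example, if the domain in question is one you control, a sketch of the same exception using the first-party key might look like this (same structure as above, with NSExceptionAllowsInsecureHTTPLoads swapped in; drop NSIncludesSubdomains if you only need the bare domain):

settings =
{
    iphone =
    {
        plist =
        {
            NSAppTransportSecurity =
            {
                NSExceptionDomains =
                {
                    -- a domain you control (first-party), so the first-party exception key applies
                    ["example.com"] =
                    {
                        NSIncludesSubdomains = true,
                        NSExceptionAllowsInsecureHTTPLoads = true,
                    },
                },
            },
        },
    },
}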

Of course, the ideal solution is to ensure any webview is connecting via a valid https connection... but that is not always possible.

Wednesday, November 25, 2015

Ebates First 30 Days Review

Of course I had heard of Ebates, but I never actually tried it until 30 days ago, on the recommendation of a friend. This is my 30-day review.

The main reason I had never tried Ebates before is that the idea of coupon sites, comparison sites, and deal sites has become quite lackluster to me over the years. I assumed Ebates was just another comparison engine, posting coupons and deals like all the rest. Boy, was I wrong...

My first experience with Ebates was great. I signed up, saw a promo for a $10 gift card after my first $25 purchase, and proceeded to order a laptop battery from Newegg. I would have ordered the battery from Newegg anyway, but now I was getting cashback and a $10 Walmart gift card just for clicking through from Ebates first. Sure enough, the cashback showed up in my account and my gift card was processed... not immediately, of course, but it ultimately happened without any strings attached and without any intervention required. I have a $10 Walmart gift card on the way to me right now.

That first Ebates experience was good enough that I installed their Chrome extension. Since then, I have made a couple more online purchases and received cashback. My cashback balance reached $5.33 in the first 30 days. I thought there was surely some strict threshold required before they would cut me a check, but I have a $5.33 check on the way as of today. Again, no strings attached and no intervention required.

Now that I've had a successful first month and Ebates has proven to be non-disruptive and practically automatic, I will certainly continue to use them. I have actually started looking for coupons again, since Ebates makes that super easy as well. Further, I am considering some larger purchases I would not otherwise have considered, thanks to 6% cashback and readily available coupons via Ebates.

I highly recommend getting on board with Ebates. It is free and supports Facebook and Google login to make it even easier. Log in, and in 1-2 clicks you'll be on your favorite ecommerce store as always, but this time you'll be earning no-strings-attached cashback via Ebates.

If you're ready to sign up for Ebates now, click here to get started.

Tuesday, July 28, 2015

API for Retrieving Website Favicon with Fallback

I have worked on a number of projects where I needed to pull in favicons from external websites. This is typically approached by checking for the existence of a /favicon.ico in the root of the website, or something along those lines. That isn't foolproof by any means, though, since the favicon can be named differently or located in a different path. It also takes a lot of unnecessary overhead to check for the existence of a favicon before serving up a fallback.

Fortunately, there is a "hidden" Google API of sorts that handles this for you. It is used to pull in favicons for websites attached to your Google+ profile page. It works as follows:

http://s2.googleusercontent.com/s2/favicons?domain=mrrobwad.blogspot.com&alt=p

It turns out the fallback can be one of a few things. So far, I've discovered that setting the 'alt' parameter to 's' yields a blank page icon and 'p' yields a page template icon when no favicon is available. Setting it to any other single letter yields a browser/world icon. There could be more, but I have only checked single letters so far.
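As a quick usage sketch (ColdFusion here, but the same idea applies anywhere), you could drop the API straight into an <img> tag. The faviconDomain variable is just a hypothetical placeholder for whatever external domain you are displaying:

<cfset faviconDomain = "mrrobwad.blogspot.com">
<cfoutput>
    <!--- alt=p falls back to the page template icon when no favicon is available --->
    <img src="http://s2.googleusercontent.com/s2/favicons?domain=#faviconDomain#&alt=p" width="16" height="16" alt="favicon for #faviconDomain#" />
</cfoutput>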

Enjoy!

Friday, July 3, 2015

How To: Minify JS and CSS with ColdFusion

If you're reading this post, I can assume two things.
  1. You're running ColdFusion and need to minify JS and/or CSS files.
  2. You've researched other solutions such as complex regex routines or CFCs and are not satisfied with those solutions.
Regarding #2, modern minification is complex and tedious. If you've tried writing your own routines, you know what I mean. Not to mention, effective minification consists of more than just code formatting. So what's the answer? Well, there is already a clear and proven minification winner: YUI Compressor. This utility is widely used, reliable, and efficient. It also includes useful options to give you some level of control as well as insight.

Now that we've picked our minification engine, how do we make it work with ColdFusion?

First, you need to get YUI Compressor installed on your server. YUI Compressor relies on Rhino, so we need to get that installed as well. All of this relies on you having Java installed, but since you are running ColdFusion, I am assuming you do!


DOWNLOAD RHINO (recommended release at time of this post is 1.7R5)
  • Install to directory of choice, e.g. C:\apps\rhino\rhino1_7R5
  • Add to Windows CLASSPATH
    • System Properties > Advanced > Environment Variables > System Variables > CLASSPATH
      • If CLASSPATH system variable doesn't already exist, add it with value of "c:\apps\rhino\rhino1_7R5" or your chosen install directory.
      • If CLASSPATH system variable already exists, edit it by adding ";c:\apps\rhino\rhino1_7R5" or ";" followed by your chosen install directory.

DOWNLOAD YUI COMPRESSOR (recommended release at time of this post is 2.4.8)
  • Install to directory of choice, e.g. c:\apps\yuicompressor\yuicompressor-2.4.8.jar

Verify you can run YUI Compressor from command line
  • Open a command prompt in a test directory with a non-minified JS file present, e.g. test.js
  • Run command $ java -jar c:\apps\yuicompressor\yuicompressor-2.4.8.jar -v c:\test\test.js -o test.min.js
  • The "-v" argument causes output to include analysis of the JS with tips for improvements. Don't include this option if you are minifying CSS (see the CSS example after this list).
  • You should see test.min.js created in the current/test directory.
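For CSS, the command is the same minus the -v flag; for example, with a hypothetical test.css in the same directory:

java -jar c:\apps\yuicompressor\yuicompressor-2.4.8.jar c:\test\test.css -o test.min.css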

Run minification from within ColdFusion:
  • Use <cfexecute> to run YUI Compressor against file in need of minification.
  • minifyResult will contain the command prompt output.
  • minifyError will contain error information.
<cfexecute name="java" arguments="-jar c:\apps\yuicompressor\yuicompressor-2.4.8.jar c:\test\test.js -o test.min.js" variable="minifyResult" errorVariable="minifyError" timeout="10"/>

  • ColdFusion runs cfexecute from the ColdFusion bin directory, so that is where the minified file will be generated. We need to move the generated file from the CF bin to the desired target directory.
  • The cf_bin_path variable value will likely be different for your install, so you need to update this variable to point to your actual CF bin directory.
<cfset cf_bin_path = "c:\ColdFusion\runtime\bin">
<cffile action = "move"
source = "#cf_bin_path#\test.min.js"
destination = "c:\target-directory\test.min.js">
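To tie the two steps together, here is a rough sketch of how you might wrap everything in a reusable function. The function name (minifyFile), the paths, and the assumption of a .js source file are all mine to illustrate the idea, so adjust them for your own setup:

<cffunction name="minifyFile" returntype="string" output="false">
    <cfargument name="sourceFile" type="string" required="true">
    <cfargument name="targetDir" type="string" required="true">

    <!--- adjust these paths for your own install --->
    <cfset var yuiJarPath = "c:\apps\yuicompressor\yuicompressor-2.4.8.jar">
    <cfset var cfBinPath = "c:\ColdFusion\runtime\bin">
    <!--- assumes a .js source; tweak the extension handling if minifying CSS --->
    <cfset var minFileName = replace(getFileFromPath(arguments.sourceFile), ".js", ".min.js")>
    <cfset var minifyResult = "">
    <cfset var minifyError = "">

    <!--- run YUI Compressor; the relative output file lands in the CF bin directory --->
    <cfexecute name="java"
               arguments="-jar #yuiJarPath# #arguments.sourceFile# -o #minFileName#"
               variable="minifyResult"
               errorVariable="minifyError"
               timeout="10"/>

    <!--- move the generated file from the CF bin to the target directory --->
    <cffile action="move"
            source="#cfBinPath#\#minFileName#"
            destination="#arguments.targetDir#\#minFileName#">

    <cfreturn "#arguments.targetDir#\#minFileName#">
</cffunction>

Calling it would then look something like: <cfset minifiedPath = minifyFile("c:\test\test.js", "c:\target-directory")>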

Tuesday, June 16, 2015

Installing PhantomJS as a Windows Service (BONUS: Using PhantomJS with ColdFusion)

If you aren't familiar with PhantomJS, you probably wouldn't be looking at this blog post. Regardless: "PhantomJS is a headless WebKit scriptable with a JavaScript API. It has fast and native support for various web standards: DOM handling, CSS selector, JSON, Canvas, and SVG." It is available at http://phantomjs.org/.

PhantomJS is a very powerful and versatile tool. Once you get comfortable with PhantomJS you may want to install your PhantomJS application as a Windows service. There are a few good reasons for this, but one is running a persistent PhantomJS web server.

If you are interfacing with PhantomJS from another application, it is simple enough to create a new instance of PhantomJS each time you need to use it (e.g. via <cfexecute> in ColdFusion). The problem is that this will result in multiple PhantomJS processes, one for each instance, which leads to unnecessary system overhead. Alternatively, you may want to keep a persistent instance of PhantomJS running as a Windows service to handle multiple requests from your application.

For this to be practical, the persistent instance of PhantomJS has to be able to accept requests and provide responses to your application. In my case, I created a PhantomJS web server application to achieve this. It listens to HTTP requests on an obscure port only available internally. The PhantomJS website has some helpful examples for setting such a web server up. From there, my ColdFusion application simply needs to interface with my PhantomJS server via HTTP.
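For reference, here is a minimal sketch of what such a server might look like using PhantomJS's built-in webserver module. The script name, port handling, and response body are placeholders that follow the conventions used later in this post:

// phantom.your-script.js - minimal PhantomJS web server sketch
var system = require('system');
var webserver = require('webserver');

var port = system.args[1]; // port number passed in as the first argument
var server = webserver.create();

var listening = server.listen(port, function (request, response) {
    // inspect request.url / request.post here, do the real work,
    // then hand the result back to the calling application
    response.statusCode = 200;
    response.setHeader('Content-Type', 'text/plain');
    response.write('Hello from PhantomJS on port ' + port);
    response.close();
});

if (!listening) {
    console.log('Unable to listen on port ' + port);
    phantom.exit(1);
}

On the ColdFusion side, interfacing with the server is just an HTTP call. The port here is a made-up example; use whatever port your PhantomJS service was started with:

<cfhttp url="http://127.0.0.1:8765/" method="get" result="phantomResponse"/>
<cfoutput>#phantomResponse.fileContent#</cfoutput>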

None of this is helpful unless the PhantomJS web server is up and running. By registering the PhantomJS application as a Windows service, we can ensure it is up and running at all times. We can also stop it, start it, and restart it as needed.

Before proceeding, note that this process requires changes to your registry. I hold no responsibility for changes you make; all changes are your own responsibility. I have successfully used this process on both a Windows 7 64-bit install and a Windows Server 2008 R2 64-bit install.

Ensure you can run your PhantomJS application from the command line without failure before turning it into a Windows service. For example, from the command line: C:\apps\your-application\bin\phantomjs.exe phantom.your-script.js <port-number>, where <port-number> represents the port number your phantom.your-script.js is set up to accept as an argument.

Download necessary tools from: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9d467a69-57ff-4ae7-96ee-b18c4790cffd&displaylang=en
  • I suggest installing to C:\apps\rktools\
  • We will need these exe files:
    • C:\apps\rktools\srvany.exe
    • C:\apps\rktools\instsrv.exe

Run at CMD: C:\apps\rktools\INSTSRV.EXE "Phantom JS" C:\apps\rktools\SRVANY.EXE
  • "Phantom JS" is an arbitrary service name. Call it whatever you like. You can use spaces here, but maintain the quotes.

Open regedit.exe
  • Navigate to HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Services > "Phantom JS" (your service name)
  • Right-click on the service name > New > Key
    • Name the key "Parameters"
  • Right-click on the Parameters key > New > String Value
    • Name the String value "Application"
    • Right-click "Application" > Modify
      • Set the value to: "C:\apps\your-application\bin\phantomjs.exe" phantom.your-script.js <port-number>
        • Make sure the path to the exe is quoted and the arguments are not. This should essentially be the same string you can pass at the command line to run your PhantomJS application, but with quotes around the exe path. Again, <port-number> represents the port number your phantom.your-script.js is set up to accept as an argument.
  • Right-click on the Parameters key > New > String Value
    • Name the String value "AppDirectory"
    • Right-click "AppDirectory" > Modify
      • Set the value to: C:\apps\your-application\bin\
        • This is the path to your application folder containing the PhantomJS exe and your PhantomJS script.

Your entries should look something like this:

  Application = "C:\apps\your-application\bin\phantomjs.exe" phantom.your-script.js <port-number>
  AppDirectory = C:\apps\your-application\bin\

Open services.msc and you should see your new Windows service present. It is set to auto-start by default, but go ahead and start it for the first time to ensure it runs without failure.

Why "Googling Yourself" is not an excuse to ignore keyword rank tracking for SEO

I was recently asked for my opinion on some SEO advice that had been shared with a client. The advice was essentially: ignore keyword rank tracking, because all Google results are personalized and you can't accurately track rankings by "Googling Yourself"; instead, look to organic traffic in Google Analytics as the sole indicator of SEO success.

First off, "Googling Yourself" seems like a bit of a misnomer for the act of checking your Google rankings by searching a keyword phrase yourself. I think of Googling my name when I read that, but it is evident what they mean in context. Ironically, one of the biggest issues I have with this advice has to do with context itself... read on to see what I mean.

There are a few things right about this:

  1. Keyword rank tracking can be considered unreliable, since search results are personalized.
  2. "Googling Yourself" can provide misleading information on keyword rankings since your results are personalized.
  3. Organic traffic in Analytics is a great indicator of overall SEO success.


Beyond those points, this advice falls well short of effective SEO, and there are ways to reliably check keyword rankings.

While "Googling Yourself" is not a reliable way to check rankings, and Google results are indeed personalized, that doesn't mean keyword rankings don't matter for SEO. This is why we built our own in-house keyword rank checking tool that uses Google's Search API to determine ranks. The Search API doesn't factor in any personalization, but it also isn't an exact mirror of the Google.com search engine. Even so, we believe it provides an authoritative baseline, and the ranks it returns are the most reliable ranks we can get.

Keyword rankings and activity are important.

Knowing how users found your website, specifically what keyword terms were used, is very beneficial. It lets you know what is working and what isn't, and sheds light on user perception and intent. In the past, Google Analytics reported on the keywords that people used to reach your website. This has since been all but stopped by Google (which is an entirely separate topic). However, you can get some referral keyword insight through Google Webmaster Tools. It isn't ideal, but it does offer something.

That said, knowing what keyword searches led to visits is a passive exercise. Keyword research and targeting is an active exercise. Why discount it just because it is difficult to assess rank? Why accept that overall organic traffic patterns are enough to indicate SEO success?

Being pro-active about SEO is better than watching organic traffic numbers passively.

It is true that the best overall indicator of how well you are doing on SEO is organic traffic. That is the end goal in most every case: more traffic from the search engines, period. The more organic traffic, the better you are doing, but how does this help you to be pro-active about SEO? It simply doesn't. It doesn't even help you to be reactive, because you have no insight into why your organic traffic is what it is. In this scenario, you simply implement some best practices (hopefully), hope for Google's favor, and anxiously wait for your organic traffic to increase. Google is smart enough that this is actually still pretty effective, but it is missing a huge piece of the equation... keyword rank tracking.

Without keyword research and tracking, you can't know what keywords are competitive or low-hanging fruit, effective or ineffective. With keyword research and tracking, you can come up with a plan to target specific phrases. This is where you can get an edge on rankings and pro-actively increase your organic traffic. Google wants to provide the best results to its users, but it needs the help of websites to make that possible. If your website cooperates by offering up clean and clear indications of context, you are helping Google identify what your website is all about. You are also laser-focusing your content toward that context, which helps build authority in Google's eyes. You can partly accomplish this laser-focus through a natural but focused application of keyword phrases in all the right places (headings, lists, titles, descriptions, etc). Do not mistake this for keyword stuffing, but approach it as naturally editing your copy to provide consistency and focus.

In summary:

Search rankings should not be discounted or ignored. Checking ranks on specific search terms (via a tool like ours) isn't going to provide an accurate view of your overall organic search performance, only of a subset of specific terms you determine worthy of tracking. However, those specific keyword rankings do affect the personalized search results everyone sees. If you rank high on highly relevant keyword searches, your chances of ranking high in related personalized results are much higher. Rankings also offer invaluable insight into how well Google associates you with what you believe the context of your website to be. If you are going after a specific niche, targeting specific keyword phrases and monitoring their performance is a huge part of reaching that niche. It all helps Google understand context (who you are) and authority (what you know or have to offer).

Client example:

Passive (keyword targeting strategy NOT in place): 1035 organic visitors (12/2014)
Pro-active (keyword targeting strategy in place): 1723 organic visitors (2/2015)

The December sample shows how the website was doing on organic traffic without keyword research and targeting, while the February sample shows how it was doing after keyword research and targeting had been in place for only a few weeks.

There is too much potential benefit to ignore keyword rankings simply because Google personalizes results.

Monday, March 9, 2015

Connecting to a Netgear Nighthawk VPN with Android

According to Netgear, neither iOS nor Android devices are supported by Netgear OpenVPN routers. This post is about Android only, so don't make much of my comments if you are looking for iOS help. The lack of Netgear VPN support for Android comes down to two sensible reasons, though Netgear doesn't explain them.

  1. Netgear uses OpenVPN. Most built-in VPN support on mobile devices is intended for use with PPTP VPNs, not OpenVPN. At first, you might be frustrated with Netgear for using a protocol without the widest support, but they have a very good reason: security. PPTP is the least secure VPN protocol. Even Microsoft, who helped develop PPTP, advises using a different VPN protocol for security reasons. BestVPN put together a great comparison of VPN protocols that reiterates the issues with PPTP and makes the advantages of OpenVPN clear. After reading it, you may be thanking Netgear for selecting OpenVPN and questioning their competitors for sticking with PPTP VPNs.
  2. Netgear's OpenVPN is TAP, not TUN. OpenVPN operates in one of two main modes: TAP or TUN. The main difference is that TAP is layer 2 and works more like a switch or bridge, while TUN is layer 3 and works at the network level to route packets on the VPN. Netgear isn't very upfront about the fact that it uses TAP, and that is a problem for Android users, as Android only supports TUN. Download most any OpenVPN client on Android and you'll be trying to make a TUN connection, which simply won't work with Netgear.
The simple solution: an Android app called OpenVPN Client. This is the only OpenVPN client on Android that currently supports TAP, as far as I am aware. It is also a stable app with good developer support. Before you go and install this app and start trying to connect, let me save you a few headaches and save the developer from some support emails as well.

To connect your Android 4.0+ device to your Netgear Nighthawk OpenVPN:
  1. Setup your VPN on your router as per steps 1-6 on these instructions from Netgear.
  2. Download the "Windows" setup zip from your router. 
  3. Extract the files to a dedicated/new folder on your Android where you can easily find them.
  4. Install the PAID version of the OpenVPN Client app (the free version does not support TAP)
  5. Tap the circular green "+" icon when you open the app.
  6. Select Import VPN.
  7. Navigate to the folder where you extracted the setup files in step 3 and select the .ovpn file.
  8. Open the new VPN connection in the app and select the Edit icon.
  9. Select Custom Options.
  10. Add the option "route-gateway" with the value set to 192.168.1.1 (in most cases, the gateway address of your local network); see the note after this list if you would rather edit the .ovpn file directly.
  11. Save changes and connect to the VPN.
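If you would rather put the option in the .ovpn file itself instead of the app's Custom Options screen, the equivalent config line is simply:

# use your own network's gateway address if it differs from 192.168.1.1
route-gateway 192.168.1.1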
OpenVPN Client is well worth the money, and you'll be taking full advantage of your Netgear Nighthawk OpenVPN TAP VPN from your Android anywhere you may go.