The Eclectic Quill

Website of Joshua McGee

Just when you thought you had mastered cross-browser testing, emoji come along

Customizing Lograge: Removing and adding fields

Lograge is a great Ruby gem for flattening your log files.  But as the author himself notes, "Lograge is opinionated, very opinionated."  Perhaps your opinions differ.  Mine did.

For instance, I don't need to log the controller, action, and format of requests; I can infer those from the request path.  What I do care about is the time of the request, the browser's User-Agent, and whether or not the request comes from a search engine.

The following code mutes logging of controller, action, and format, and adds a timestamp, the User-Agent, and an attempt to identify search engines (via the Browser gem).  You end up with log file lines that look like this:

method=GET path=/ status=200 duration=200.29 view=53.22 db=40.01 time=2015-07-26 18:25:48 +0200 search_engine=false user_agent=Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0

method=GET path=/commander/mayael-the-anima status=200 duration=533.39 view=112.51 db=166.25 time=2015-07-26 18:27:40 +0200 search_engine=true user_agent=Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)
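Here is a minimal sketch of such an initializer.  It assumes Lograge's custom_options hook, Rails's append_info_to_payload, and the Browser gem's bot? predicate; the formatter wrapper that drops fields is just one way to mute them.

# config/initializers/lograge.rb
Rails.application.configure do
  config.lograge.enabled = true

  # Wrap the stock key=value formatter so we can drop the fields
  # that are inferable from the request path.
  key_value = Lograge::Formatters::KeyValue.new
  config.lograge.formatter = lambda do |data|
    key_value.call(data.except(:controller, :action, :format))
  end

  # Add the fields we do care about.
  config.lograge.custom_options = lambda do |event|
    ua = event.payload[:user_agent].to_s
    {
      time:          Time.now,
      search_engine: Browser.new(ua).bot?,  # recent Browser gem versions take the UA string directly
      user_agent:    ua
    }
  end
end

The User-Agent isn't in the instrumentation payload by default, so expose it from the controller:

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  # Make the request's User-Agent available to Lograge via the payload.
  def append_info_to_payload(payload)
    super
    payload[:user_agent] = request.user_agent
  end
end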

Use the code as-is, or as a jumping-off point for expressing your own opinions in your logs.

Facebook “Like” button is lower than Twitter and Google Plus buttons

By default, the Facebook "Like" button will be misaligned relative to other social buttons:

[Screenshot: the Facebook "Like" button sitting lower than the adjacent Twitter and Google Plus buttons]

The fix is to override the vertical alignment of the span that Facebook's SDK wraps around its iframe.  Add this to your CSS:

.fb_iframe_widget span {
  vertical-align: top !important;
}

… and you should be good to go:

[Screenshot: all three buttons correctly aligned]

Use nginx to block referrer spam from Semalt and other spammers

If you run a website, you have surely seen an increase in referrer spam lately.  In this type of spam, an organization fakes the Referer header of an HTTP request so that its URL shows up in your analytics and catches the webmaster's eye.  This throws off your site statistics and taxes your server, because each bogus request makes your server send the spammer a full copy of the page.

Semalt — a fake SEO tool that has hijacked a botnet for its spam — is currently the worst offender, but there are plenty of others.

If you use nginx to serve your webpages (or as a reverse proxy in front of Apache), however, you can detect these referrers and block them across all your sites.

To begin, create a directory for global nginx rules:

sudo mkdir /etc/nginx/global
sudo nano /etc/nginx/global/referer-spam.conf

Paste the following into the editor, save, and exit:

##
# Referrer exclusions
##

if ($http_referer ~ "(semalt\.com|buttons-for-website\.com)") {
  set $prohibited "1";
}

if ($prohibited) {
  return 403;
}

The preceding configuration blocks semalt.com and buttons-for-website.com, two major offenders, but you can block whatever referrers you like.  To build the regular expression, take each hostname, escape its periods with backslashes, join the hostnames with vertical bars, and surround the whole list with parentheses.
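For example, to also block a third, hypothetical offender at spam-example.net, extend the alternation:

if ($http_referer ~ "(semalt\.com|buttons-for-website\.com|spam-example\.net)") {
  set $prohibited "1";
}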

Then, in each site's configuration file, add the following line:

server {
  … all the stuff that's there already …

  include /etc/nginx/global/*;
}

Yes, it's kind of a pain that you have to repeat this for every site, but the payoff comes later: any new global directive you drop into the /etc/nginx/global/ directory takes effect everywhere the include appears.  If you have a site template file, I recommend adding the above line to it.

Test your configuration to ensure there are no typos (thanks to commenter mike for this step):

sudo nginx -t

Then reload your nginx config (thanks to commenter Marvin for modifying this step):

sudo service nginx reload

... and you should be good to go.
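To verify the block, you can send a request with a forged Referer header (this assumes you have curl; replace yoursite.example with one of your own domains):

curl -I -e "http://semalt.com/" https://yoursite.example/

The response status should now be 403 Forbidden, while requests with ordinary referrers still return 200.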

5 Sites I Wish I Had Built

As a web developer, you sometimes stumble upon sites so perfect in conception and implementation that you kick yourself for not having thought to build them.  Here are five I wish I had created.

camelcamelcamel

Site premise:  Let users add Amazon items to track.  Ask them what they are willing to pay.  Watch Amazon for price changes.  When the item is suitably discounted, send them an email.

It's super easy to use.  The emails they send contain a graph of historical pricing data.  And the link to the item has an Amazon affiliate ID embedded, so the site's creators make a slice of the profits.  Glorious.

ANAGRAMATRON

Site premise: Watch the Twitter API.  Look for tweets that are anagrams of each other.  Review them, and when you find a poignant pair, post them.

I love anagrams.  I love the Twitter API.  Why didn't I think to combine them?  Go check the site.  There are amazing pairs of tweets.

TinEye Multicolor Search Lab

Site premise: Scrape Creative Commons images.  Hash their colors, and store in a database.  Let users specify a color scheme, and show them the matching images.

This site is utterly brilliant.  Under the covers, I'm sure it works much like my site TileArray.  I love using it, and the love is only slightly reduced by the fact that I facepalm that it's not my code.

Six Minute Story

Site premise:  Post a writing prompt.  Give a writer six minutes to compose a new piece of flash fiction.  Prevent them from editing the story after the time elapses, and publish the results.

It's fun to write and fun to read others' works.  And after discovering and contributing to the site, I've become friends with the creator, who is a really neat guy.

Where's George?

Site premise:  Let users enter the serial numbers from American paper money.  Tell them to write the site URL on the bills and spend them.  When another person receives a marked bill, curiosity drives them to visit the site and enter the serial number themselves, and users can watch how currency travels.

I didn't think of this one, but I was hugely into it in the late '90s and into the 2000s.  I used to be ranked #11 on the site, and I designed the algorithm (the "George Score") used to determine its competitive rankings.  Still wish I'd thought of it, though.