Posted on May 14th, 2012 No comments
I just got a new MacBook Pro with Lion installed, and I love it. I used the Setup Assistant (very similar to Migration Assistant) to transfer my account from my old MacBook Pro to my new one. The process was wonderfully simple, seamless and successfully transferred 99.99% of my files, documents, applications and settings.
After running Software Update to check for the latest system and software updates, I am told I need to download the Epson Printer Driver software update… for a whopping 922 MB!
Seriously, Epson? Sure, I have a fast connection, but not everyone does yet. Why force your users to download nearly a gigabyte of software when we own, at most, one or two of your printers? Why must I keep drivers for every printer you ever made on my computer? That’s just insane.
Your printers are pretty decent but your programmers are seriously lazy.
Here’s another way we end up paying more for everything.
The more printers Epson makes, the more code they stuff into their bloated driver download. The more we download, the more our ISPs incrementally charge us for bandwidth.
It’s a printer for gosh sakes. Didn’t printer drivers used to fit on a floppy?
Posted on February 25th, 2012 No comments
Things I learned while attempting to build a custom WordPress plugin…
A) The documentation is far from clear.
B) There are many ways of doing it.
C) Many blog posts around the net offer tutorials and examples, and many of them are outdated.
D) Proper, logical coding practices and assumptions do not work.
E) You have to just mess with it until it works.
“The world is already flooded with blog posts about how great W3 Total Cache is and how terrific Super Cache is and how fantastic Bat Cache is. Surely one of these will work for me,” I think to myself.
But alas, I tested using each of these plugins on my site and none of them really did the trick. W3 Total Cache actually broke my theme completely and threw errors; Super Cache didn’t help much either. And Bat Cache was just complete rubbish. Discouraged, I decided to write my own plugin from scratch. So, off I go.
I’ll write a custom caching plugin that uses Memcache.
After a week of off-and-on development, I got my caching plugin working extremely well: what used to take up to 15 seconds now loads in under 300 ms! Same page, same WP Multi-site, one plugin with only about 300 lines of code. But it only works in a highly controlled environment. I want to put it up on the WordPress site so others can use it, but before I can release it into the wild, I need to make sure it’s really polite about errors and easy to work with.
Specifically, I have these 3 simple goals to finish it up:
1) If you don’t have the Memcache extension installed when you try to activate the plugin, it needs to display a warning message about the missing extension;
2) I want to use the standard h2 error message to display this error to the user; and
3) I want the plugin to remain deactivated if activation fails.
You would think the WordPress Codex, Plugin Resources and Plugin API Documentation would be chock full of great examples for Plugin developers, wouldn’t you? I did, too, but unfortunately this just isn’t the case.
I’ll give examples of A through E above, and hopefully explain how I was able to achieve the proper implementation points of 1, 2 and 3.
How do you handle WordPress Plugin Activation errors gracefully?
This post says to use the wp_die( ) function.
I tried that and it actually does prevent the plugin from getting activated.
This post suggests that you should just let the fatal error happen and display a slightly better error message in a status box. Terrible idea, IMO.
Better to use this. Render red error messages in h2s like this post shows. I’m trying to combine these concepts into a single, elegant solution and having a really rough time of it.
I think the only way to do it is like HungryCoder says, you have to freaking ob_start and catch the error with output buffering…
Posted on February 18th, 2012 No comments
See, nobody’s perfect. Not even Facebook. (Ha! Far from it, right?)
I’m not able to post a status update at the moment. I thought I’d document it.
Not bad, though, Facebook IT team, I think you have, what, 99.9999% uptime?
Posted on February 4th, 2012 2 comments
Chances are you’ve heard of Memcache. Tons of websites use it to speed up page load times. I often say that Redis is like Memcache on steroids. You may not have heard of Redis, but if you’re using Memcache or APC, you should see how Redis could improve what you’re already doing. If you’re not using Memcache or APC yet, don’t bother with them – I urge you to go straight to Redis, for a bunch of reasons.
First, Memcache is a key-value store only. You set a string value under a string key, and that’s it. With Redis, on the other hand, you have the luxury of several different types of data storage, including keys and values, but Redis also supports hashes, lists, sets and sorted sets.
Here’s an example to help explain why this is such a huge improvement. Say you have a big array of data, such as the kind that comes back from a web service request, like a parsed XML file or JSON packet. With Memcache, to store this in memory you have to serialize the data (and often base64 encode it) on the way in; then, to get a portion of the data back out, you have to fetch the whole string, base64 decode it, and deserialize it before you can read from it. These extra steps needlessly chew up compute cycles.
With the same data object stored in a Redis hash, for example, you have instant access to the data stored under any key of the hash; you don’t have to grab the whole thing, deserialize it and all that mess. Just a single line of code, and boom, there’s your data. Much more elegant.
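To make the difference concrete, here’s a minimal pure-Python sketch of the two access patterns. Note that the stores here are just stand-in dicts, not real Memcache or Redis clients; only the shape of the round trip is meant to mirror the real thing:

```python
import json

# Stand-ins for the two servers; only the access patterns matter here.
memcache_like = {}   # string key -> serialized string value
redis_hashes = {}    # string key -> hash of field -> value

user = {"id": 42, "name": "Nick", "plan": "pro"}

# Memcache style: serialize the whole object on the way in...
memcache_like["user:42"] = json.dumps(user)
# ...then fetch and deserialize the WHOLE blob just to read one field.
plan_via_memcache = json.loads(memcache_like["user:42"])["plan"]

# Redis-hash style: each field is individually addressable,
# like HSET on the way in and HGET on the way out.
redis_hashes["user:42"] = dict(user)              # HSET user:42 ...
plan_via_redis = redis_hashes["user:42"]["plan"]  # HGET user:42 plan

assert plan_via_memcache == plan_via_redis == "pro"
```

With a real Redis client, the HGET round trip sends just the one field over the wire, which is where the savings in compute and bandwidth come from.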
Another key reason Redis is superior to Memcache is that when you ask Memcache to store something, it’s in memory and that’s it. If your server goes down and you have to reboot, you have to repopulate your Memcache data all over again. If your app has gotten huge, and your cache is huge, this can not only take a while but also put a huge strain on your database server during this so-called “cache warmup” period. Unlike Memcache, Redis actually stores a copy of its data to a file on disk in the background, so if you stop and start your Redis server, it reloads everything automatically. It does this mind-blowingly fast, too: millions of keys in seconds.
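For reference, this background snapshotting is configured in redis.conf; the fragment below shows the stock RDB “save” defaults (check your own redis.conf, as values can vary by version):

```
# Write a snapshot to dump.rdb if at least N keys changed in M seconds:
save 900 1
save 300 10
save 60 10000
# For stronger durability, you can additionally enable the append-only file:
# appendonly yes
```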
Finally, Redis supports master-slave configurations that you can use to build high-availability systems more easily. And in the upcoming release, which everyone is eagerly awaiting, Redis Cluster will support sharding out of the box! So, now that you want to dig in and start learning Redis, here are my…
Top 10 Redis Resources Online
Notes
You may be wondering about NoSQL and where Redis fits into this discussion. When people bring up NoSQL, I tend to think of MongoDB. Unlike Memcached and Redis, MongoDB is a general purpose document/object (think JSON) store that (strangely enough) allows you to use some SQL-like commands to retrieve subsets of your data. I think of Redis as a data structure server. You don’t use SQL to talk to Redis, so I guess it could be considered along with other NoSQL solutions. You can compare Redis to MongoDB by going to try.mongodb.org/
- Redis documentation: redis.io/commands
- Try Redis Online: try.redis-db.com
- Redis-DB Google Group List Archives: groups.google.com/group/redis-db
- Antirez (Redis developer Salvatore Sanfilippo’s) blog: antirez.com
- Recent blog posts about Redis: RSS Feed
- Q&A: stackoverflow.com/questions/tagged/redis
- The Little Redis Book – Just released openmymind.net/2012/1/23/The-Little-Redis-Book
- Slides from Redis Tutorial simonwillison.net/static/2010/redis-tutorial
- A Collection of Redis Use Cases www.paperplanes.de/2010/2/16/a_collection_of_redis_use_cases
- My GitHub Page. Chock full of Redis-related project forks. github.com/phpguru
- Bonus: Here’s a slideshow for a Redis 101 talk I gave if you’re interested.
Posted on February 2nd, 2012 No comments
SQL Transactions with Kohana 3 – Note: Transactions were added in Kohana 3.1
StackOverflow Questions tagged Kohana-3 – Look for answers by Kemo, Samsoir and the other Kohana developers
KohanaFramework.org/discussions – Official Kohana Framework user forum
Kohana 3 ORM Tutorials and Samples – Terrific example usage of Kohana’s built-in ORM library
Kohana 3.2 Complete Tutorial – Quickly documented on the site, but comes with downloadable sample application
Useful Kohana 3.2 Modules & Code – Here’s BadSyntax’s link list of useful Kohana modules (Kohana OAuth 2.0, Media Compression, Minion CLI Task Runner, Minion Tasks Migrations, Kohana 3 Project Template, Pagination, Manage Site View Assets)
Blogging about Kohana 3.2? Get the RSS Feed
Posted on January 19th, 2012 No comments
I’ve been running WampServer for years on my trusty Dell XPS running Windows XP Pro. A while back I installed Subversion and got it working with mod_dav and authz_svn to serve multiple repositories, each with their own user and group permissions. It was tricky to set up and there are some finer points that most documentation I read doesn’t address. I followed a few different web resources like this great beginners guide, but ultimately it boils down to the 5 simple steps below.
Just recently I needed to add a new repository. I thought I had done everything right, but when I went to use it for the first time, I got the following errors:
D:\svn\repos>svn mkdir http://localhost:8080/svn/myproject/trunk -m "Trunk"
svn: OPTIONS of 'http://localhost:8080/svn/myproject': 200 OK (http://localhost:8080)

D:\svn\repos>svn ls http://localhost:8080/svn/myproject/trunk
svn: URL 'http://localhost:8080/svn/myproject/trunk' non-existent in that revision

D:\svn\repos>svn ls http://localhost:8080/svn/myproject
svn: Could not open the requested SVN filesystem
If you are getting any of these common errors, this post is for you.
When using svn over http, you have to use Apache’s configuration files to control access to each repository separately. Start by installing Apache, Subversion, and then referencing these three modules in your httpd.conf as follows:
LoadModule dav_module modules/mod_dav.so
LoadModule dav_svn_module modules/mod_dav_svn.so
LoadModule authz_svn_module modules/mod_authz_svn.so
Now we’re ready to begin.
1) Add the repository:
#> svnadmin create D:\svn\repos\myproject
*(On Unix systems, chown -R the new repository directory so it is writable by the user Apache runs as)*
2) Edit your httpd.conf (or extras/httpd-vhosts.conf) adding something like this:
<Location /svn/myproject>
  DAV svn
  SVNPath d:/svn/repos/myproject
  AuthType Basic
  AuthName "My Project SVN Repo"
  AuthUserFile c:/etc/svn-auth-file
  Require valid-user
  AuthzSVNAccessFile c:/etc/svn-acl
</Location>
3) Add the project to your svn auth file at c:/etc/svn-acl (it’s referenced in the Location directive in your Apache config.)
[groups]
yourgroupname = yourusername, user_b, user_c

[myproject:/]
yourusername = rw
@yourgroupname = rw
This is what tells Apache which users and groups are allowed to access the path(s) in your repository.
4) Give yourusername an htpasswd (and user_b and user_c)
cd c:/etc/
htpasswd -c svn-auth-file yourusername
*(If that file already exists, omit the -c option)*
5) Finally, restart Apache
httpd -k restart
Then you’re ready to create trunk
#> svn mkdir http://localhost:8080/svn/myproject/trunk -m "Adding trunk"
Committed revision 1.
I got the errors shown above when I forgot one or more of these steps.
Posted on January 16th, 2012 No comments
I’m rather ticked off by politics in general.
One of the main things politicians are doing with more and more regularity is passing legislation almost behind our backs, every day chipping away, little by little, at more and more of our precious freedom. Freedom is what makes America great. Countless patriots have died protecting it. Why does the present crop of politicians believe it is their duty to protect me from myself?
They’re at it again… with this SOPA / PIPA business. This time, it’s going to hit home for many of you readers. SOPA is the Stop Online Piracy Act and PIPA is the Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act of 2011. Only a politician could write a bill with a name as horrific as its intent. While these bills may have good intentions, in practice it’s just a giant fiasco as usual.
What’s this all about? AmericanCensorship.org puts it bluntly:
Congress is about to pass internet censorship, even though the vast majority of Americans are opposed. We need to kill the bill – PIPA in the Senate and SOPA in the House – to protect our rights to free speech, privacy, and prosperity.
If you like the freedom to read, comment on and post about whatever you like on your blog, or around the internet, don’t let the government shut you down, or tell you what you can or can’t link to, or fine you for failure to comply. It’s just one more way the government is trying to step in and tell you how to run every aspect of your life. It’s time to say NO!
Let your voice be heard! Learn more at the Fight SOPA/PIPA page at WordPress.org.
The internet is fine just how it is, thank you. Now leave it alone!
Think I’m kidding?
Please pass it on…
Posted on January 4th, 2012 No comments
While I’m listening to streaming radio via iTunes or Pandora, I typically keep an eye out for new tracks that I like and write the artists’ names down to search for later.
During this process I end up finding lots of new websites dedicated to electronic music, DJ remixes and so on. Here are a few of my favorites.
On www.hybridized.org search for DjKira aka Nick Lewis.
Join hybridized.org to download basically everything without limits.
On www.sense.fm check out Ashley Bonsall – Into Trance 011
On sense.fm, go to the forums and you can download complete DJ sets for most of the tracks they spin.
Just listen at www.protonradio.com.
With the protonradio free streaming player you get some of the best dance, trance and electronic artists of our time.
Posted on January 2nd, 2012 1 comment
Why do I care about my hosts file?
You’re developing a website, and DNS is still pointing to the old site, or it’s parked at the domain name registrar, and you want to be able to test the site before DNS is updated. Maybe you don’t have access to the DNS registry for the domain but you need to see the site working how it will on the production URL.
This is especially handy when you’re building a WordPress site on your local machine but want it to “think” it is on the full .com domain. Since WordPress stores the site name in the database, this is a useful way to develop locally, but on a fully-qualified .com domain.
Another example is creating your own “websites” in your local development environment using a “.dev” extension, so you can tell your local machine that “mywebsite.dev” is located on the local box. Very handy for web development.
What we can do is trick our computer into thinking that a website is at a different IP address than the one global DNS records are reporting.
This is accomplished by editing your hosts file.
Every system has one. The file is located in different places on Linux, Mac or Windows. On Linux systems, the file is typically located at /etc/hosts.
Note: Before editing your hosts file, make a copy of it as a backup!
Finding your Host File…
…on the Mac
If you have TextMate you can type
mate /private/etc/hosts
You can enter your username & password when you save the file.
If you know how to use vi you can type
sudo vi /private/etc/hosts
Or, if you just have the default Mac text editor, follow these steps:
- From Finder’s Go menu, choose Go to Folder…
- Type in /private/etc and click Go
- When the folder opens, right click on hosts and choose Open with other…
- Choose TextEdit from your Applications list
…on Windows
The process is a little more complicated. Here’s how I do it:
- Go to Start -> Run, or otherwise browse your installed program files
- Find Notepad (or Notepad++) and right click
- Choose Run as… or Run as Administrator
- Now in Notepad, navigate to C:\Windows\System32\drivers\etc\
- Switch the file-type filter to All Files (*.*), not just text files (*.txt)
- Now open the file named hosts – it has no extension
What to put into your hosts file
Now that you have your hosts file open, you’ll see some default entries. The file has two columns. Whitespace is ignored — that is, you can use tabs or spaces to separate the IP address on the left from the DNS names on the right. Comments are made by preceding the line with a hash or number symbol (#). Put each entry on its own line.
You’ll also see something like this:
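On virtually every system, the core default entry looks like the line below (Macs also ship a couple of extras, such as broadcasthost):

```
127.0.0.1    localhost
```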
This tells your system that localhost is located at the IP address 127.0.0.1. Entries like these are standard on all systems — don’t change them or your system could become borked (restore your backup!).
You can add new entries above or below the default entries, but I recommend adding a few line breaks ABOVE the default stuff and adding your custom changes at the top of the file.
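As a sketch, the top of a customized hosts file might look like this; the IP address and domain names are placeholders, not real values:

```
# --- my custom entries ---
203.0.113.10    example.com www.example.com    # new host, before DNS cutover
127.0.0.1       mywebsite.dev                  # local development site
# --- end custom entries ---
```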
You can use this technique to point your domain name to the IP address of your hosting account before the DNS record is updated, create staging domains, development domains and so on. Hope you found this useful.
Posted on November 13th, 2011 No comments
I recently set aside an hour to read Robert Sosinski’s blog Starting Amazon EC2 with Mac OS X. What a fantastic guide that is! Thanks, Robert!
Hopefully he won’t mind my slightly modified mirror, below.
Starting Amazon EC2 with Mac OS X
Amazon EC2 (Elastic Compute Cloud) is now one of the top choices for cloud-based deployment. With EC2, you can ramp up to a massive server farm in a matter of minutes, while scaling back down to a single server when things calm down. The benefits are obvious, as you only pay for what you need and you have access to more computing power right when you need it.
EC2 works on the idea of server instances. You start by building one instance, which costs as little as a few cents per hour of operation, and you can even start free (with a t1.micro for a month!).
An instance acts just like a dedicated machine, with full root access and the ability to install any software you choose. You can choose from a variety of sizes and operating systems. An m1.small instance, for example, comes with some pretty competitive system specs, including:
1.7 Ghz Xeon CPU
1.75 GB of RAM
160 GB of local storage
250 MB/s network interface
If your first instance gets some heavy traffic, EC2 can build another one automatically for another few cents an hour. Turnkey infrastructure has never been better.
First off, you have to set up your computer so you can connect to and administer your Amazon EC2 account.
If you don’t already have an account at Amazon.com, create one now.
1. Log into your Amazon.com account and then click over to the Amazon AWS subdomain and sign up for EC2. It will be linked to your Amazon.com account.
2. Once signed up, hover over the yellow “Your Web Services Account” button. Here, you should select the “AWS Access Identifiers” link.
3. Login, if prompted.
4. Select the “X.509 certificates” link.
5. Click on the “Create New” link. Amazon will ask you if you are sure; say yes. Doing so will generate two files.
A PEM encoded X.509 certificate named something like cert-xxxxxxx.pem
A PEM encoded RSA private key named something like pk-xxxxxxx.pem
6. Download both of these files.
What is PEM?
PEM (Privacy Enhanced Mail) is a protocol originally developed to secure email. Although rarely deployed for its intended purpose, its encoding mechanism for generating certificates is used by quite a few web services, including Amazon EC2, PayPal Web Payments Pro and SSH key pairs.
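As a quick illustration of the format, a PEM file is just base64-encoded binary (DER) data wrapped in BEGIN/END marker lines. This little sketch uses made-up bytes, not real key material:

```python
import base64

# A fake PEM-style blob; the payload is just b"not a real key", base64-encoded.
pem = """-----BEGIN RSA PRIVATE KEY-----
bm90IGEgcmVhbCBrZXk=
-----END RSA PRIVATE KEY-----"""

lines = pem.strip().splitlines()
assert lines[0].startswith("-----BEGIN") and lines[-1].startswith("-----END")

# Everything between the markers is the base64-encoded payload.
payload = base64.b64decode("".join(lines[1:-1]))
assert payload == b"not a real key"
```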
7. Download the Amazon EC2 Command-Line Tools.
8. Open the Terminal, go to your home directory, make a new ~/.ec2 directory and open it in the Finder.
$ mkdir .ec2
$ cd .ec2
$ open .
9. Copy the certificate and private key from your download directory into your ~/.ec2 directory.
10. Unzip the Amazon EC2 Command-Line Tools, look in the new directory, and move both the bin and lib directories into your ~/.ec2 directory. This directory should now have the following:
The cert-xxxxxxx.pem file
The pk-xxxxxxx.pem file
The bin directory
The lib directory
11. Now, you need to set a few environment variables. To help yourself out in the future, place everything necessary in your ~/.bash_profile file; this will automatically set up the Amazon EC2 Command-Line Tools every time you start a Terminal session. Just open ~/.bash_profile in your text editor and add the following to the end of it:
# Setup Amazon EC2 Command-Line Tools
export EC2_HOME=~/.ec2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
12. Since you made some changes to your ~/.bash_profile file, you will need to reload it for everything to take effect. Run this:
$ source ~/.bash_profile
Creating and Connecting to a Server Instance
Launching an EC2 Instance from the Command Line on Mac OS X
Now that your computer is set up to work with EC2, it is time to make your server instance.
1. Type this into the Terminal.
$ ec2-describe-images -o amazon
What does the -o option do?
The -o option stands for owner. In this example, you are asking EC2 to describe the images that belong to Amazon. To see every image available, give the -a option instead.
2. After a short wait, you will be given a list of available images which should look something like this.
IMAGE ami-20b65349 ec2-public-images/fedora-core4-base.manifest.xml
IMAGE ami-22b6534b ec2-public-images/fedora-core4-mysql.manifest.xml
IMAGE ami-23b6534a ec2-public-images/fedora-core4-apache.manifest.xml
IMAGE ami-25b6534c ec2-public-images/fedora-core4-apache-mysql.manifest.xml
IMAGE ami-26b6534f ec2-public-images/developer-image.manifest.xml
IMAGE ami-2bb65342 ec2-public-images/getting-started.manifest.xml
IMAGE ami-36ff1a5f ec2-public-images/fedora-core6-base-x86_64.manifest.xml
IMAGE ami-bd9d78d4 ec2-public-images/demo-paid-AMI.manifest.xml
Note that you can also do something like
$ ec2-describe-images -a > ami-list-2011-11.txt
and then search the generated text file for platforms you might need, such as magento or wordpress:
$ cat ami-list-2011-11.txt | grep magento
3. Let’s create something simple for now: a Fedora Core 4 machine with Apache. To do this, we need to generate a keypair, which will supply the credentials we need to SSH (Secure Shell) into our server instance. To make a new keypair named ec2-keypair, type the following:
$ ec2-add-keypair ec2-keypair
4. This will create an RSA private key and output it to the screen. Copy this entire key, including the -----BEGIN RSA PRIVATE KEY----- and -----END RSA PRIVATE KEY----- lines, to the clipboard. Now, go into your ~/.ec2 directory, make a new file called ec2-keypair, open it in your text editor, paste the entire key and save it.
5. Next, it is important to change the permissions of your keypair file, or else EC2 will not let you connect to it via SSH. To do this, just type the following in your ~/.ec2 directory:
$ chmod 600 ec2-keypair
6. Time to create your new machine. Ensure you are in your ~/.ec2 directory and type the following, substituting “ami-23b6534a” with the ID of the image you wish to launch.
NOTE: It is important to understand that once you tell EC2 to start creating your server instance, you will start paying 10 cents every hour until you terminate it.
$ ec2-run-instances ami-23b6534a -k ec2-keypair
RESERVATION r-xxxxxxxx xxxxxxxxxxxx default
INSTANCE i-xxxxxxxx ami-23b6534a pending ec2-keypair
7. It may take a bit for EC2 to start your new machine, but you can always check its status by typing:
$ ec2-describe-instances
RESERVATION r-xxxxxxxx xxxxxxxxxxxx default
INSTANCE i-xxxxxxxx ami-23b6534a ec2.compute-1.amazonaws.com
8. Great, your instance is up and running. Take note of your server’s web address (ec2-xx-xxx-xx-xx.compute-1.amazonaws.com) and ID (i-xxxxxxxx) as you will need both of these later in this tutorial. If you forget them, you can always type the ec2-describe-instances command again. Now, let’s prep our server by enabling port 22 for SSH access and port 80 so Apache can serve web pages.
$ ec2-authorize default -p 22
PERMISSION default ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0
$ ec2-authorize default -p 80
PERMISSION default ALLOWS tcp 80 80 FROM CIDR 0.0.0.0/0
9. This is the moment you have been waiting for: connecting to your new machine. Open a new web browser window and type in your instance’s web address. You should now see an Apache welcome page.
10. Fantastic, your instance is serving the Apache test page. Now, let’s SSH into the machine and check it out. Ensure you are in your ~/.ec2 directory as you will need your ec2-keypair file.
$ ssh -i ec2-keypair root@ec2-xx-xxx-xx-xx.compute-1.amazonaws.com
11. SSH will ask you if you are sure you want to connect. Just enter yes and you should be connected to your server instance.
Welcome to an EC2 Public Image
See /etc/ec2/release-notes.txt
Terminating Your Server Instance
Keep in mind that you are still on the meter. Because of this, you should shut down your server instance if you do not plan on using it.
1. Enter the terminate command with your server’s instance ID.
$ ec2-terminate-instances i-xxxxxxxx
INSTANCE i-xxxxxxxx running shutting-down
2. Take a look to see if everything is terminated:
$ ec2-describe-instances
RESERVATION r-xxxxxxxx xxxxxxxxxxxx default
INSTANCE i-xxxxxxxx ami-23b6534a terminated
3. Done and done.
That’s your intro to using Amazon EC2 instances on Mac OS X. The command-line tools you installed in step 7 can do much more: check out the Amazon AWS Command Line Tools API for all the various ways you can monitor your EC2 instances and other AWS services from the command line. Here are a few more resources:
- AWS EC2 API Documentation
- Finding a suitable AMI – Amazon Machine Image
- Generating a new SSH Key-pair
- Launching an Amazon EC2 Instance
- You can even Launch EC2 Instance via the web with an HTTP Query
Amazing stuff, and more affordable than you might think. See Reserved Instances.
When starting instances, be sure to take note of the Availability Zone you’re starting your instance in. If you end up creating more servers, for example an Apache server, a MySQL server, and a Memcache or Redis server, you’ll want to make sure you start them all in the same availability zone to avoid unnecessary charges and security group headaches. More about AWS Availability Zones and AWS Security Groups over at Rightscale.