Posted on February 25th, 2012 No comments
Things I learned while attempting to build a custom WordPress plugin…
A) The documentation is far from clear.
B) There are lots of ways of doing it.
C) Many blog posts exist around the net with tutorials and examples, and many of them are outdated
D) Proper, logical coding practices and assumptions do not work.
E) You have to just mess with it until it works.
“The world is already flooded with blog posts about how great W3 Total Cache is and how terrific Super Cache is and how fantastic Bat Cache is. Surely one of these will work for me,” I think to myself.
But alas, I tested using each of these plugins on my site and none of them really did the trick. W3 Total Cache actually broke my theme completely and threw errors; Super Cache didn’t help much either. And Bat Cache was just complete rubbish. Discouraged, I decided to write my own plugin from scratch. So, off I go.
I’ll write a custom caching plugin that uses Memcache.
After a week of off-and-on development, I got my caching plugin working extremely well: what was up to 15 seconds is now sub-300ms! Same page, same WP Multi-site, one plugin with only about 300 lines of code. But it only works in a highly-controlled environment. I want to put it up on the WordPress site so others can use it, but before I can release it into the wild, I need to make sure it’s really polite about errors and easy to work with.
Specifically, I have these 3 simple goals to finish it up:
1) If you don’t have the Memcache extension installed when you try to activate the plugin, it needs to display a warning message about the missing extension;
2) I want to use the standard h2 error message to display this error to the user; and
3) I want the plugin to remain deactivated if activation fails.
You would think the WordPress Codex, Plugin Resources and Plugin API Documentation would be chock full of great examples for Plugin developers, wouldn’t you? I did, too, but unfortunately this just isn’t the case.
I’ll give examples of A through E above, and hopefully explain how I was able to achieve implementation points 1, 2 and 3.
How do you handle WordPress Plugin Activation errors gracefully?
This post says to use the wp_die() function.
I tried that and it actually does prevent the plugin from getting activated.
This post suggests that you should just let the fatal error happen and display a slightly better error message in a status box. Terrible idea, IMO.
Better to use this: render red error messages in h2s, like this post shows. I’m trying to combine these concepts into a single, elegant solution, and I’m having a real rough time of it.
I think the only way to do it is like HungryCoder says: you have to freaking ob_start and catch the error with output buffering…
Posted on February 18th, 2012 No comments
See, nobody’s perfect. Not even Facebook. (Ha! Far from it, right?)
I’m not able to post a status update at the moment. I thought I’d document it.
Not bad, though, Facebook IT team, I think you have, what, 99.9999% uptime?
Posted on February 4th, 2012 2 comments
Chances are you’ve heard of Memcache. Tons of websites use it to speed up page load times. I often say that Redis is like Memcache on steroids. You may not have heard of Redis, but if you’re using Memcache or APC, you should see how Redis could improve what you’re already doing. And if you’re not using Memcache or APC yet, don’t bother with them; I urge you to take a look at Redis instead, for a bunch of reasons.
First, Memcache is a key-value store only. You set a string value under a string key, and that’s it. With Redis, on the other hand, you have the luxury of several different types of data storage, including keys and values, but Redis also supports hashes, lists, sets and sorted sets.
Here’s an example to help explain why this is such a huge improvement. Say you have a big array of data, the kind that can come back from a web service request, like a parsed XML file or JSON packet. With Memcache, to store this in memory you have to serialize the data (and often base64 encode it) on the way in; then, to get a portion of the data back out again, you have to fetch the whole string, base64 decode it and deserialize it before you can read from it. These extra steps needlessly chew up compute cycles.
With the same data object stored in a Redis Hash, for example, you can have instant access to the data stored in any key of the hash, you don’t have to grab the whole thing, deserialize it and all that mess. Just a single line of code, and boom, there’s your data. Much more elegant.
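The difference is easy to see in miniature. Here’s a toy Python sketch of the two access patterns (the data and key names are made up for illustration; a dict stands in for the cache server). The flat store has to round-trip the entire serialized blob to read one field, while the hash-style store addresses a single field directly, much like Redis’s HSET/HGET:

```python
import json

# A parsed web-service response we might want to cache (made-up data)
profile = {"id": 42, "name": "Ada", "tags": ["admin", "dev"]}

# Memcache-style: the whole object lives under one key as a serialized string.
flat_store = {"user:42": json.dumps(profile)}  # serialize on the way in

# To read just the name, you must fetch and decode the ENTIRE blob:
name_via_blob = json.loads(flat_store["user:42"])["name"]

# Redis-hash-style: each field is addressable on its own.
# (A real client would do HSET user:42 name Ada ... then HGET user:42 name.)
hash_store = {"user:42": {"id": "42", "name": "Ada", "tags": "admin,dev"}}
name_via_hash = hash_store["user:42"]["name"]  # one field, no decode pass

print(name_via_blob, name_via_hash)
```

With a real Redis server, the hash version is a single HGET command over the wire; no serialization step exists in your application code at all.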
Another key reason Redis is superior to Memcache: when you ask Memcache to store something, it’s in memory and that’s it. If your server goes down and you have to reboot, you have to repopulate your Memcache data all over again. If your app has gotten huge, and your cache is huge, this can not only take a while but also puts a huge strain on your database server during this so-called “cache warmup” period. Unlike Memcache, Redis stores a copy of its data to a file on disk in the background, so if you stop and start your Redis server, it reloads everything automatically. It does this mind-blowingly fast, too; think millions of keys in seconds.
Finally, Redis supports master-slave configurations that you can use to build high-availability systems more easily. And in the upcoming release (which everyone is eagerly awaiting), Redis Cluster will support sharding out of the box! So, now that you want to dig in and start learning Redis, here are my…
Top 10 Redis Resources Online
Notes
You may be wondering about NoSQL and where Redis fits into this discussion. When people bring up NoSQL, I tend to think of MongoDB. Unlike Memcached and Redis, MongoDB is a general purpose document/object (think JSON) store that (strangely enough) allows you to use some SQL-like commands to retrieve subsets of your data. I think of Redis as a data structure server. You don’t use SQL to talk to Redis, so I guess it could be considered along with other NoSQL solutions. You can compare Redis to MongoDB by going to try.mongodb.org/
- Redis documentation: redis.io/commands
- Try Redis Online: try.redis-db.com
- Redis-DB Google Group List Archives: groups.google.com/group/redis-db
- Antirez (Redis developer Salvatore Sanfilippo’s) blog: antirez.com
- Recent blog posts about Redis: RSS Feed
- Q&A: stackoverflow.com/questions/tagged/redis
- The Little Redis Book – Just released openmymind.net/2012/1/23/The-Little-Redis-Book
- Slides from Redis Tutorial simonwillison.net/static/2010/redis-tutorial
- A Collection of Redis Use Cases www.paperplanes.de/2010/2/16/a_collection_of_redis_use_cases
- My GitHub Page. Chock full of Redis-related project forks. github.com/phpguru
- Bonus: Here’s a slideshow for a Redis 101 talk I gave if you’re interested.
Posted on February 2nd, 2012 No comments
SQL Transactions with Kohana 3 – Note: Transactions were added in Kohana 3.1
StackOverflow Questions tagged Kohana-3 – Look for answers by Kemo, Samsoir and the other Kohana developers
KohanaFramework.org/discussions – Official Kohana Framework user forum
Kohana 3 ORM Tutorials and Samples – Terrific example usage of Kohana’s built-in ORM library
Kohana 3.2 Complete Tutorial – Briefly documented on the site, but comes with a downloadable sample application
Useful Kohana 3.2 Modules & Code – Here’s BadSyntax’s link list of useful Kohana modules (Kohana OAuth 2.0, Media Compression, Minion CLI Task Runner, Minion Tasks Migrations, Kohana 3 Project Template, Pagination, Manage Site View Assets)
Blogging about Kohana 3.2? Get the RSS Feed
Posted on January 19th, 2012 No comments
I’ve been running WampServer for years on my trusty Dell XPS running Windows XP Pro. A while back I installed Subversion and got it working with mod_dav and authz_svn to serve multiple repositories, each with their own user and group permissions. It was tricky to set up and there are some finer points that most documentation I read doesn’t address. I followed a few different web resources like this great beginners guide, but ultimately it boils down to the 5 simple steps below.
Just recently I needed to add a new repository. I thought I had done everything right, but when I went to use it for the first time, I got the following errors:
D:\svn\repos>svn mkdir http://localhost:8080/svn/myproject/trunk -m "Trunk"
svn: OPTIONS of 'http://localhost:8080/svn/myproject': 200 OK (http://localhost:8080)

D:\svn\repos>svn ls http://localhost:8080/svn/myproject/trunk
svn: URL 'http://localhost:8080/svn/myproject/trunk' non-existent in that revision

D:\svn\repos>svn ls http://localhost:8080/svn/myproject
svn: Could not open the requested SVN filesystem
If you are getting any of these common errors, this post is for you.
When using svn over http, you have to use Apache’s configuration files to control access to each repository separately. Start by installing Apache and Subversion, then load these three modules in your httpd.conf as follows:
LoadModule dav_module modules/mod_dav.so
LoadModule dav_svn_module modules/mod_dav_svn.so
LoadModule authz_svn_module modules/mod_authz_svn.so
Now we’re ready to begin.
1) Add the repository:
#> svnadmin create D:\svn\repos\myproject
*(On Unix systems, chown -R myproject so it is writable by the user Apache runs as)*
2) Edit your httpd.conf (or extras/httpd-vhosts.conf) adding something like this:
<Location /svn/myproject>
  DAV svn
  SVNPath d:/svn/repos/myproject
  AuthType Basic
  AuthName "My Project SVN Repo"
  AuthUserFile c:/etc/svn-auth-file
  Require valid-user
  AuthzSVNAccessFile c:/etc/svn-acl
</Location>
3) Add the project to your access control file at c:/etc/svn-acl (it’s referenced by the AuthzSVNAccessFile directive in your Apache config):
[groups]
yourgroupname = yourusername, user_b, user_c

[myproject:/]
yourusername = rw
@yourgroupname = rw
This is what tells Apache which users and groups are allowed to access the path(s) in your repository.
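Since the access file is plain INI-style text, you can sanity-check it before restarting Apache. Here’s a small Python sketch (the file contents from above are inlined for illustration) that expands the group and lists who can touch the repository root:

```python
import configparser

# The same svn-acl contents shown above, inlined for illustration
acl_text = """
[groups]
yourgroupname = yourusername, user_b, user_c

[myproject:/]
yourusername = rw
@yourgroupname = rw
"""

parser = configparser.ConfigParser()
parser.read_string(acl_text)

# Expand groups into member lists
groups = {name: [u.strip() for u in members.split(",")]
          for name, members in parser["groups"].items()}

# For each principal granted access to myproject's root, resolve to real users
for principal, perms in parser["myproject:/"].items():
    users = groups[principal.lstrip("@")] if principal.startswith("@") else [principal]
    print(principal, "->", users, perms)
```

In a real script you’d read c:/etc/svn-acl with parser.read() instead of inlining the text; the parsing logic is identical.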
4) Give yourusername an htpasswd entry (and user_b and user_c):
cd c:/etc/
htpasswd -c svn-auth-file yourusername
*(If that file already exists, omit the -c option)*
5) Finally, restart Apache
httpd -k restart
Then you’re ready to create trunk
#> svn mkdir http://localhost:8080/svn/myproject/trunk -m "Adding trunk"
Committed revision 1.
I got the errors shown above when I forgot one or more of these steps.
Posted on January 16th, 2012 No comments
I’m rather ticked off by politics in general.
One of the main things politicians are doing with more and more regularity is to pass legislation almost behind our backs, every day chipping away, little by little, more and more of our precious freedom. Freedom is what makes America great. Countless patriots have died protecting it. Why is it the present crop of politicians believe it is their duty to protect me from myself?
They’re at it again… with this SOPA / PIPA business. This time, it’s going to hit home for many of you readers. SOPA is the Stop Online Piracy Act and PIPA is the Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act of 2011. Only a politician could write a bill with a name as horrific as its intent. While these bills may have good intentions, in practice they’re just a giant fiasco, as usual.
What’s this all about? AmericanCensorship.org puts it bluntly:
Congress is about to pass internet censorship, even though the vast majority of Americans are opposed. We need to kill the bill – PIPA in the Senate and SOPA in the House – to protect our rights to free speech, privacy, and prosperity.
If you like the freedom to read, comment on and post about whatever you like on your blog, or around the internet, don’t let the government shut you down, or tell you what you can or can’t link to, or fine you for failure to comply. It’s just one more way the government is trying to step in and tell you how to run every aspect of your life. It’s time to say NO!
Let your voice be heard! Learn more at the Fight SOPA/PIPA page at WordPress.org.
The internet is fine just how it is, thank you. Now leave it alone!
Think I’m kidding?
Please pass it on…
Posted on January 4th, 2012 No comments
While I’m listening to streaming radio via iTunes or Pandora, I typically keep an eye out for new tracks that I like and write the artists’ names down to search for later.
During this process I end up finding lots of new websites dedicated to electronic music, DJ remixes and so on. Here are a few of my favorites.
On www.hybridized.org search for DjKira aka Nick Lewis.
Join hybridized.org to download basically everything without limits.
On www.sense.fm check out Ashley Bonsall – Into Trance 011
On sense.fm, go to the forums and you can download complete DJ sets for most of the tracks they spin.
Just listen at www.protonradio.com.
With the protonradio free streaming player you get some of the best dance, trance and electronic artists of our time.
Posted on January 2nd, 2012 1 comment
Why do I care about my hosts file?
You’re developing a website, and DNS is still pointing to the old site, or it’s parked at the domain name registrar, and you want to be able to test the site before DNS is updated. Maybe you don’t have access to the DNS registry for the domain but you need to see the site working how it will on the production URL.
This is especially handy when you’re building a WordPress site on your local machine but want it to “think” it is the full .com domain. Since WordPress stores the site name in the database, this is a great way to develop locally while using the fully-qualified .com domain.
Another example is to create your own “websites” on your local development environment using a “.dev” extension. So you can actually tell your local machine that “mywebsite.dev” is located on the local box. Very handy for web development.
What we can do is trick our computer into thinking that a website is at a different IP address than the one global DNS records are reporting.
This is accomplished by editing your hosts file.
Every system has one. The file is located in different places on Linux, Mac or Windows. On Linux systems, the file is typically located at /etc/hosts.
Note: Before editing your hosts file, make a copy of it as a backup!
Finding your Host File…
…on the Mac
If you have TextMate you can type
mate /private/etc/hosts
You can enter your username & password when you save the file.
If you know how to use vi you can type
sudo vi /private/etc/hosts
Or, if you just have the default Mac text editor, follow these steps:
- From Finder’s Go menu, choose Go to Folder…
- Type in /private/etc and click Go
- When the folder opens, right click on hosts and choose Open with other…
- Choose TextEdit from your Applications list
…on Windows
The process is a little more complicated on Windows. Here’s how I do it:
- Go to Start -> Run, or otherwise browse your installed program files
- Find Notepad (or Notepad++) and right click
- Choose Run as… or Run as Administrator
- Now in Notepad, navigate to C:\Windows\System32\drivers\etc\
- Switch to open all files (*.*) not just text files (.txt)
- Now open the file named hosts – it has no extension
What to put into your hosts file
Now that you have your hosts file open, you’ll see some default entries. The file has two columns to it. Whitespace is ignored — that is, you can use tabs or spaces to separate the IP address on the left, from the DNS names on the right. Comments are done by preceding the line with a hash or number symbol (#). Put each entry on its own line.
You’ll also see something like this:
127.0.0.1       localhost
This tells your system that localhost is located at the IP address 127.0.0.1. Entries like these are standard on all systems; don’t change them, or your system could become borked (restore your backup!).
You can add new entries above or below the default entries, but I recommend adding a few line breaks ABOVE the default stuff and adding your custom changes at the top of the file.
You can use this technique to point your domain name to the IP address of your hosting account before the DNS record is updated, create staging domains, development domains and so on. Hope you found this useful.
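After saving the file, you can confirm an entry resolves the way you expect. Here’s a quick Python check; it uses the stock localhost entry, since the system resolver consults the hosts file before DNS on a typical setup:

```python
import socket

# 'localhost' is mapped to 127.0.0.1 in every stock hosts file,
# so the resolver should answer without ever hitting DNS:
print(socket.gethostbyname("localhost"))

# After adding your own line, e.g. "127.0.0.1  mywebsite.dev",
# socket.gethostbyname("mywebsite.dev") would return 127.0.0.1 too.
```

You can do the same from the command line with ping; the point is just to verify the override took effect before you start testing the site.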
Posted on November 13th, 2011 No comments
I recently set aside an hour to read Robert Sosinski’s blog Starting Amazon EC2 with Mac OS X. What a fantastic guide that is! Thanks, Robert!
Hopefully he won’t mind my slightly modified mirror, below.
Starting Amazon EC2 with Mac OS X
Amazon EC2 (Elastic Compute Cloud) is now one of the top choices for cloud-based deployment. With EC2, you can ramp up to a massive server farm in a matter of minutes, while scaling back down to a single server when things calm down. The benefits are obvious, as you only pay for what you need and you have access to more computing power right when you need it.
EC2 works on the idea of server instances. You start by building one instance, which costs as low as a few cents per hour of operation, and you can even start free (with a t1.micro for a month!).
An instance acts just like a dedicated machine, with full root access and the ability to install any software you choose. You can choose from a variety of sizes and operating systems. An m1.small instance, for example, comes with some pretty competitive system specs, including:
1.7 Ghz Xeon CPU
1.75 GB of RAM
160 GB of local storage
250 Mb/s network interface
If your first instance gets some heavy traffic, EC2 can build another one automatically for another few cents an hour. Turnkey infrastructure has never been better.
First off, you have to set up your computer so you can connect to and administer your Amazon EC2 account.
If you don’t already have an account at Amazon.com, create one now.
1. Log into your Amazon.com account and then click over to the Amazon AWS subdomain and sign up for EC2. It will be linked to your Amazon.com account.
2. Once signed up, hover over the yellow “Your Web Services Account” button. Here, you should select the “AWS Access Identifiers” link.
3. Login, if prompted.
4. Select the “X.509 certificates” link.
5. Click on the “Create New” link. Amazon will ask you if you are sure, say yes. Doing so will generate two files.
A PEM encoded X.509 certificate named something like cert-xxxxxxx.pem
A PEM encoded RSA private key named something like pk-xxxxxxx.pem
6. Download both of these files.
What is PEM?
PEM (Privacy Enhanced Mail) is a protocol originally developed to secure email. Although rarely deployed for its intended purpose, its encoding mechanism for generating certificates is used by quite a few web services, including Amazon EC2, PayPal Web Payments Pro and SSH key pairs.
7. Download the Amazon EC2 Command-Line Tools.
8. Open the Terminal, go to your home directory, make a new ~/.ec2 directory and open it in the Finder.
$ mkdir .ec2
$ cd .ec2
$ open .
9. Copy the certificate and private key from your download directory into your ~/.ec2 directory.
10. Unzip the Amazon EC2 Command-Line Tools, look in the new directory and move both the bin and lib directory into your ~/.ec2 directory. This directory should now have the following:
The cert-xxxxxxx.pem file
The pk-xxxxxxx.pem file
The bin directory
The lib directory
11. Now, you need to set a few environment variables. To help yourself out in the future, you will be placing everything necessary in your ~/.bash_profile file. What this will do is automatically set up the Amazon EC2 Command-Line Tools every time you start a Terminal session. Note that EC2_HOME must point at your ~/.ec2 directory, since the other lines reference it, and the tools themselves are Java-based, so make sure Java is installed. Just open ~/.bash_profile in your text editor and add the following to the end of it:
# Setup Amazon EC2 Command-Line Tools
export EC2_HOME=~/.ec2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
12. As you made some changes to your ~/.bash_profile file, you will need to reload it for everything to take effect. Run this:
$ source ~/.bash_profile
Creating and Connecting to a Server Instance
Launching an EC2 Instance from the Command Line on Mac OS X
Now that your computer is set up to work with EC2, it is time to make your server instance.
1. Type this into the Terminal.
$ ec2-describe-images -o amazon
What does the -o option do?
The -o option stands for owner. In this example, you are asking EC2 to describe the images that belong to Amazon. To see every image available, give the -a option instead.
2. After a short wait, you will be given a list of available images which should look something like this.
IMAGE ami-20b65349 ec2-public-images/fedora-core4-base.manifest.xml
IMAGE ami-22b6534b ec2-public-images/fedora-core4-mysql.manifest.xml
IMAGE ami-23b6534a ec2-public-images/fedora-core4-apache.manifest.xml
IMAGE ami-25b6534c ec2-public-images/fedora-core4-apache-mysql.manifest.xml
IMAGE ami-26b6534f ec2-public-images/developer-image.manifest.xml
IMAGE ami-2bb65342 ec2-public-images/getting-started.manifest.xml
IMAGE ami-36ff1a5f ec2-public-images/fedora-core6-base-x86_64.manifest.xml
IMAGE ami-bd9d78d4 ec2-public-images/demo-paid-AMI.manifest.xml
Note that you can also do something like
$ ec2-describe-images -a > ami-list-2011-11.txt
and then search the generated text file for platforms you might need, such as magento or wordpress:
$ cat ami-list-2011-11.txt | grep magento
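The same search can be scripted if you want more than a raw grep. This Python sketch parses the IMAGE lines that ec2-describe-images emits (using sample lines from above) and filters by keyword:

```python
# Sample IMAGE lines, as emitted by ec2-describe-images
sample = """\
IMAGE ami-20b65349 ec2-public-images/fedora-core4-base.manifest.xml
IMAGE ami-22b6534b ec2-public-images/fedora-core4-mysql.manifest.xml
IMAGE ami-23b6534a ec2-public-images/fedora-core4-apache.manifest.xml
IMAGE ami-25b6534c ec2-public-images/fedora-core4-apache-mysql.manifest.xml
"""

def find_amis(text, keyword):
    """Return (ami_id, manifest) pairs whose manifest path contains keyword."""
    hits = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[0] == "IMAGE" and keyword in parts[2]:
            hits.append((parts[1], parts[2]))
    return hits

print(find_amis(sample, "apache"))
```

In practice you’d feed it the contents of the ami-list text file generated above instead of the inlined sample.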
3. Let’s create something simple for now: a Fedora Core 4 machine with Apache. To do this, we need to generate a keypair. This keypair will supply the credentials we need to SSH (Secure Shell) into our server instance. To make a new keypair named ec2-keypair, type the following:
$ ec2-add-keypair ec2-keypair
4. This will create an RSA Private Key and then output it to the screen. Copy this entire key, including the -----BEGIN RSA PRIVATE KEY----- and -----END RSA PRIVATE KEY----- lines, to the clipboard. Now, go into your ~/.ec2 directory, make a new file called ec2-keypair, open it in your text editor, paste the entire key and save it.
5. Next, it is important to change the permissions of your keypair file, or else SSH will refuse to use it when you try to connect. To do this, just type the following in your ~/.ec2 directory:
$ chmod 600 ec2-keypair
6. Time to create your new machine. Ensure you are in your ~/.ec2 directory and type the following, substituting “ami-23b6534a” with the id of the image you wish to create.
NOTE: It is important to understand that once you tell EC2 to start creating your server instance, you will start paying 10 cents every hour until you terminate it.
$ ec2-run-instances ami-23b6534a -k ec2-keypair
RESERVATION r-xxxxxxxx xxxxxxxxxxxx default
INSTANCE i-xxxxxxxx ami-23b6534a pending ec2-keypair
7. It may take a bit for EC2 to start your new machine, but you can always check its status by typing:
$ ec2-describe-instances
RESERVATION r-xxxxxxxx xxxxxxxxxxxx default
INSTANCE i-xxxxxxxx ami-23b6534a ec2-xx-xxx-xx-xx.compute-1.amazonaws.com running
8. Great, your instance is up and running. Take note of your server’s web address (ec2-xx-xxx-xx-xx.compute-1.amazonaws.com) and ID (i-xxxxxxxx) as you will need both of these later in this tutorial. If you forget them, you can always type the ec2-describe-instances command again. Now, let’s prep our server by enabling port 22 for SSH access and port 80 so Apache can serve web pages.
$ ec2-authorize default -p 22
PERMISSION default ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0
$ ec2-authorize default -p 80
PERMISSION default ALLOWS tcp 80 80 FROM CIDR 0.0.0.0/0
9. This is the moment you have been waiting for, connecting to your new machine. Open a new web browser window and type in your instance’s web address. You should now see an Apache welcome page.
10. Fantastic, your instance is serving the Apache test page. Now, let’s SSH into the machine and check it out. Ensure you are in your ~/.ec2 directory as you will need your ec2-keypair file.
$ ssh -i ec2-keypair root@ec2-xx-xxx-xx-xx.compute-1.amazonaws.com
11. SSH will ask you if you are sure you want to connect. Just enter yes and you should be connected to your server instance.
(Your terminal will greet you with the EC2 login banner: “Welcome to an EC2 Public Image”, along with a pointer to /etc/ec2/release-notes.txt.)
Terminating Your Server Instance
Keep in mind that you are still on the meter. Because of this, you should shut down your server instance if you do not plan on using it.
1. Enter the terminate command with your server’s instance ID.
$ ec2-terminate-instances i-xxxxxxxx
INSTANCE i-xxxxxxxx running shutting-down
2. Take a look to see if everything is terminated:
$ ec2-describe-instances
RESERVATION r-xxxxxxxx xxxxxxxxxxxx default
INSTANCE i-xxxxxxxx ami-23b6534a terminated
3. Done and done.
That’s your introduction to using Amazon EC2 instances on Mac OS X. You installed the command-line tools in step 7 above; check out the Amazon AWS Command Line Tools API for all the various ways you can monitor your EC2 instances and other AWS services from the command line. Here are a few more resources:
- AWS EC2 API Documentation
- Finding a suitable AMI – Amazon Machine Image
- Generating a new SSH Key-pair
- Launching an Amazon EC2 Instance
- You can even Launch EC2 Instance via the web with an HTTP Query
Amazing stuff, and more affordable than you might think. See Reserved Instances.
When starting instances, be sure to take note of the Availability Zone you’re starting your instance in. If you end up creating more servers, for example, an Apache server, a MySQL server, a Memcache or Redis server, you’ll want to make sure you start them all in the same availability zone to avoid unnecessary charges and security group headaches. More about AWS Availability Zones and AWS Security Groups over at Rightscale.
Posted on November 4th, 2011 No comments
This is a slightly modified mirror of http://homepage.mac.com/kelleherk/iblog/C711669388/E351220100/index.html
OK, so you have a nice replication setup, but how do you know it is actually working, and what do you do when it stops? This short article shows how to check and quickly fix replication that has stopped. This procedure takes 2 minutes and can be done remotely on the command line.
To check if replication is working, log into the slave and execute:
> SHOW SLAVE STATUS;
The result is something like this:
mysql> show slave status\G
*************************** 1. row ***************************
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
             (… plus many more fields …)
If either Slave_IO_Running or Slave_SQL_Running shows No, then replication is stopped. The Last_Errno and Last_Error fields might give you a clue as to what went wrong.
If all seems OK, you can confirm further that everything is working by logging into the master, executing SHOW MASTER STATUS, and comparing the master’s binary log file and position with what the slave has executed.
If a recovery is required, you can often do a quick one: find the point at which the slave stopped, then simply reset the slave and restart it at that point in the master’s binary logs. If this quick procedure fails, you will have to perform the more time-consuming full copy from the master and restart replication the way you did when you initially set it up.
Quick Reset Procedure
1) First, issue a STOP SLAVE
> STOP SLAVE;
2) Important: Next, issue a SHOW SLAVE STATUS and get the stopping point information
> SHOW SLAVE STATUS;
At this stage you must make note of the result of SHOW SLAVE STATUS; if you don’t have this info on hand, you will not be able to complete the procedure. I’m usually accessing the server remotely in a terminal program, so I always copy the result from the screen and paste it into a text editor on my machine.
The information we need from that result is: Master_Host, Master_User, Master_Port, Master_Log_File and Read_Master_Log_Pos…
… and, if replicating over SSL, the Master_SSL_* values (CA file, CA path, certificate and key) as well.
3) Next, issue a RESET SLAVE:
> RESET SLAVE;
4) Now we issue a CHANGE MASTER command, for example (substituting your own values of course):
> CHANGE MASTER TO
    MASTER_HOST='192.168.1.241',
    MASTER_USER='repluser',
    MASTER_PASSWORD='replpassword',
    MASTER_PORT=3306,
    MASTER_LOG_FILE='binary-log.000123',
    MASTER_LOG_POS=4567;
Add the four MASTER_SSL parameters (MASTER_SSL_CA, MASTER_SSL_CAPATH, MASTER_SSL_CERT and MASTER_SSL_KEY) only if you are replicating over SSL.
5) Finally, start the slave:
> START SLAVE;
And check again with SHOW SLAVE STATUS to make sure we are replicating again.
Note: If you are repeatedly hitting situations where replication errors out and stops, then you need to reassess your setup. It is VERY important to have BOTH master and slave on Uninterruptible Power Supplies, if that is not obvious! If you have recovered and still get errors, then a full recovery, taking a complete dump from the master and setting up the slave from scratch, is necessary.
You can of course write a script to perform the slave-running check every 5 minutes and email you if replication has hit an error and stopped. Jeremy Zawodny, in his book High Performance MySQL, discusses ways to automate slave replication checking and alert you when replication has stopped or fallen too far behind.
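The heart of such a monitoring script is just parsing the \G output of SHOW SLAVE STATUS. Here’s a Python sketch of that parsing step; the sample output is illustrative, and in a real script you’d capture the text from the mysql client (or a database driver) and wire the result up to email:

```python
def slave_is_healthy(status_text):
    """Parse `SHOW SLAVE STATUS\\G` output and check both replication threads."""
    fields = {}
    for line in status_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    # Both the IO thread and the SQL thread must be running
    return (fields.get("Slave_IO_Running") == "Yes"
            and fields.get("Slave_SQL_Running") == "Yes")

# Illustrative output from a slave whose SQL thread has stopped
sample = """
*************************** 1. row ***************************
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
                   Last_Errno: 1062
"""
print(slave_is_healthy(sample))
```

Drop a call like this into a cron job, and you’ll know within minutes when a slave stops replicating instead of discovering it weeks later.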
What follows is a slightly edited mirror of http://homepage.mac.com/kelleherk/iblog/C711669388/E351220100/index.html
For the last year kelleherk had avoided this, expecting it would be hard. But replication really is not that hard after all… and it makes backing up very easy, avoiding special scripts, SQL dumps, etc., as well as providing peace of mind: if your master’s hard drive fails unrecoverably, you know you have a recent, if not exact, copy of ALL databases on the slave. I was lucky enough to learn from some really amazing MySQL admins, and reading kelleherk’s post helped me remember how to do it.
Hardware/software scenario for these instructions was Apple XServes running OS X Server 10.3.4 (Darwin Unix version 7.4.0) and MySQL 4.0.20 standard binary installation. MySQL resides at /usr/local/mysql and the global my.cnf file is at /etc/my.cnf. I use the default (bash) shell.
The master has been running happily on its own dedicated XServe (serving mostly WebObjects applications) and needs a backup solution that takes an exact copy once per night of the master server without ever shutting down the master. Another XServe that acts as a fileserver has plenty of capacity to become a MySQL slave. All the commands on this post also work fine on MySQL 5.1 on Ubuntu Server.
These instructions involve shutting down the master one time, just long enough to copy the contents of the mysql/data directory across the network to the slave. This was quick in my case since all the servers share the same gigabit subnet and the databases were not too large. You also need root privileges both in mysql and on the servers themselves. Command lines beginning with # below signify that the server root user is logged in; if not logged in as root, you need to constantly do sudo and enter a password, which adds unnecessary fluff to these instructions. But be careful… root has “no questions asked” power!
IMPORTANT: This also assumes that /usr/local/mysql/bin is the leftmost path in your shell PATH variable. This is required to make sure your mysql commands run against the binary installation and not the “bundled” mysql that ships preinstalled with Darwin, which is NOT installed in /usr/local/mysql.
1) Preparing the slave
2) Prepare the master
3) Shut down the master MySQL
4) Copy the data directory from master to slave
5) Restart the master and verify the creation of a binary log
6) Finish configuring the slave
7) Start the slave and verify replication
Preparing the Slave
Simply download the binary installer package and run the installer for mysql, then run the installer for the Startup Item. DO NOT configure or start up mysqld yet!
Login to slave as root
% su root (locally) or % ssh root@slave-ip-address (remotely)
If necessary, edit /etc/profile so that your PATH variable begins with /usr/local/mysql/bin and then log out and in again
Delete the newly installed mysql data directory, since we will be copying over the master’s data directory. WARNING! Don’t use this method unless you really know what you’re doing! If you have an existing slave that is not that far behind and just needs to catch up, follow the quick-reset steps earlier on this page instead.
# cd /usr/local/mysql
# rm -r ./data
Decide right now on a special user and password for replication, let’s say repluser and replpassword.
Also decide right now on a special user and password for backup shutdown/startup, let’s say backuser and backpassword
(You can substitute your own passwords!)
Next prepare the slave config file
# pico /etc/my.cnf
Enter the following slave configuration parameters
# this default slave mysql user only has SHUTDOWN privilege allowing the backup script on the
# slave to shutdown mysqld without providing a username and password
user = backuser
password = backpassword
#log-bin = /var/db/repl/binary-log
# server-id must be unique; I use the last portion of this machine’s IP address
server-id = 143
# These are the master’s connection details (NOTE: master-host is the MASTER’s IP address)
master-host = 192.168.1.241
master-user = repluser
master-password = replpassword
master-port = 3306
Next save the file and close pico
[ctrl-o] and [ctrl-x]
…. and that’s it for now on the slave. Read below to finish with the slave setup.
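For what it’s worth, the backup script mentioned in the my.cnf comment above could be sketched roughly like this (the function name, archive path, and date format are my own placeholders, and it assumes the user/password lines in /etc/my.cnf are read by the client tools — verify that on your setup before trusting it):

```shell
#!/bin/sh
# Rough sketch of a slave backup routine -- NOT battle-tested as written.
backup_slave() {
    # mysqladmin should pick up backuser/backpassword from /etc/my.cnf,
    # so no -u/-p flags here (verify this on your setup)
    mysqladmin shutdown
    # archive the whole data directory while mysqld is down
    tar czf "/var/backups/mysql-data-$(date +%Y%m%d).tar.gz" \
        -C /usr/local/mysql data
    # bring the slave back up; it will reconnect to the master and catch up
    mysqld_safe &
}
```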
Preparing the Master
Log in to the master as root. Let the mysql server continue running for now.
Next create a directory owned by the mysql user for storing the master binary log (we don’t want it in the default location, the data directory).
# cd /var/db
# mkdir repl
# chown -R mysql:wheel repl
Next update the master my.cnf file using pico text editor
# pico /etc/my.cnf
Now add these lines to the [mysqld] parameters
# This turns on binary logging and determines the pathname of the log
log-bin = /var/db/repl/binary-log
# server-id should be a unique id between 1 and 2^32 - 1
# I used the last portion of the IP address of this server
server-id = 241
Next save the file and close pico
[ctrl-o] and [ctrl-x]
DO NOT restart the master mysqld yet! We want this my.cnf to be read only after we stop and copy the master data to the slave so that replication begins on identical copies of the databases.
Now log into mysql to add the repluser and backuser. Note that while we are creating these two users on the master, they will really be used on the slave ….. but remember that the master will soon be copied to the slave just before we begin replicating, so these users and privileges will be mirrored on the slave after we copy over.
Note: the following GRANT statements assume the subnet with the mysql servers has IP addresses beginning with 192.168.1. Change as appropriate for your situation.
# mysql -u root -p
mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repluser'@'192.168.1.%' IDENTIFIED BY 'replpassword';
mysql> GRANT SHUTDOWN ON *.* TO 'backuser'@'192.168.1.%' IDENTIFIED BY 'backpassword';
Check connections and decide when to shut down the mysql server
mysql> SHOW PROCESSLIST;
Shutting Down the Master
When ready to shutdown…
# mysqladmin -u root -p shutdown
Copying the mysql data directory to the slave
When mysqld has stopped, we will use scp to copy the data folder to the slave
# scp -r /usr/local/mysql/data root@slave-ip-address:/usr/local/mysql
When finished copying, we can restart the master. Don’t worry about the slave, which is still not started: once the master is binary logging after the restart, the slave will read the log and catch up to synchronize.
Restarting the Master
# mysqld_safe &
Press return key.
Now check if binary logging is working
# cd /var/db/repl
# ls -al
drwxr-xr-x 5 mysql wheel 170 24 Jun 09:32 .
drwxr-xr-x 23 root wheel 782 24 Jun 09:32 ..
-rw-rw---- 1 mysql wheel 20041 23 Jun 10:35 binary-log.001
-rw-rw---- 1 mysql wheel 56 24 Jun 09:33 binary-log.index
You should see a file named binary-log.001. If not, you have got to troubleshoot and fix it, then delete the slave data directory, shut down the master, and copy over the data directory again before restarting. The only problem I had when I first did this was that I had a binary log name in my cnf file that mysql just did not like, so initially use “binary-log”, which is sure to work.
You can examine the file like this:
# mysqlbinlog binary-log.001
If you wish, log into mysql and create a test database, add a table, and add a record. Then log out and examine the binary log, and you will see the SQL commands in there, ready for the slave to execute.
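For that test, statements along these lines will do (repltest and ping are placeholder names of my own):

```shell
# Write the placeholder test statements to a scratch file; paste them
# at the mysql> prompt on the master, or feed the file to the client.
printf '%s\n' \
  "CREATE DATABASE repltest;" \
  "CREATE TABLE repltest.ping (id INT AUTO_INCREMENT PRIMARY KEY, note VARCHAR(50));" \
  "INSERT INTO repltest.ping (note) VALUES ('hello from the master');" \
  > /tmp/repltest.sql
# then: mysql -u root -p < /tmp/repltest.sql
# and:  mysqlbinlog /var/db/repl/binary-log.001 | grep repltest
```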
If binary logging is working, it’s time to finish with the slave
Finish configuring the slave
First fix ownership on the data folder that we copied over
# cd /usr/local/mysql
# chown -R mysql:wheel data
Verify ownership if you wish…
# ls -al ./data
Now start the slave…
# mysqld_safe &
When the slave has started, log into it and check that the test SQL stuff you did on the master has replicated. BUT DO NOT run SQL statements on the slave yourself that would jeopardize the integrity of the slave as an exact copy. If you wish, create a read-only user on the MASTER and then log into the slave using that read-only user to verify replication.
In addition, you can go into the /usr/local/mysql/data directory and you will see the relay log. Also check out the online docs for the SQL commands that report master and slave status.
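The status commands I lean on are SHOW SLAVE STATUS (run on the slave) and SHOW MASTER STATUS (run on the master); wrapped up as a couple of hypothetical helpers:

```shell
# Run on the slave: Slave_IO_Running and Slave_SQL_Running should both
# read Yes, and Seconds_Behind_Master should be at or near 0.
check_slave() {
    mysql -u root -p -e 'SHOW SLAVE STATUS\G' \
        | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
}

# Run on the master: reports the current binary log file and position
# that the slave should be reading from.
check_master() {
    mysql -u root -p -e 'SHOW MASTER STATUS'
}
```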
If it’s working, then congratulations! If not …. then google it.