Posted on June 2nd, 2012 No comments
A few years back, I found the iPhone app Runmeter. It keeps getting better, too, and is still one of the best $5 I ever spent. Very cool app. When you finish a run, skate or walk, it sends you a j.mp short link to a Google map of your exercise with all the stats it captured via GPS.
As a web developer interested in link shortening services, I instantly signed up to use j.mp to shorten my links.
Around September of 2009, bit.ly encouraged their users to switch to j.mp to make their URLs even shorter, but they now appear to have shelved the j.mp brand along with the excellent j.mp sidebar. This wouldn’t be a big deal if the new bit.ly services were as good or better, but they’re far from it. The new bit.ly bookmarklet is annoying: it takes five times longer to shorten a link, tries to integrate with sharing sites for you, and has lots of other annoyances.
So they deprecated the j.mp sidebar, which as of this writing still works, but you can no longer find it anywhere on the web! So frustrating.
To work around this problem, here are some instructions: just sign up for bit.ly if you don’t already have an account, and follow the steps below.
bit.ly/j.mp sidebar bookmarklet
Drag this link to your browser’s bookmark toolbar
bit.ly/j.mp sidebar: manual creation method
1. Create a new blank bookmark in your bookmark toolbar folder and name it j.mp sidebar (or bit.ly sidebar if you prefer)
2. Paste in the following for the URL
Posted on May 16th, 2012 33 comments
If you need to compile Memcache or wget on Mac OS X Lion and are wondering why you are getting the error
no acceptable C compiler found in $PATH
you’re not alone.
Thanks to this post, I was able to fix my problem. Here are the steps.
- Run App Store
- Search for Xcode – it’s a free install from Apple
- Wait a while. It took 30 minutes to download for me on a 20 Mbps connection
- Authenticate and let Xcode install. Once Xcode is installed you may be thinking you’re done. You would be wrong!
- Launch Xcode and run the mobile toolkit update (you can’t skip it, deal with it)
- Go to Xcode Preferences or press ⌘, (Command-comma)
- Click the Downloads tab, then the Components list
- On the last row of the available downloads are the Command Line tools. Install them.
You should be good to go after that!
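Once the Command Line Tools are in place, configure scripts stop failing because they can finally find a compiler on your $PATH. As a quick sanity check, here’s a small Python sketch that mimics that PATH search (the candidate compiler names are just common defaults, not anything the installer guarantees):

```python
import shutil

def find_c_compiler(candidates=("cc", "gcc", "clang")):
    """Return the path of the first C compiler found on $PATH, or None."""
    for name in candidates:
        path = shutil.which(name)
        if path:
            return path
    return None

compiler = find_c_compiler()
if compiler:
    print("C compiler found:", compiler)
else:
    print("no acceptable C compiler found in $PATH")
```

If it still prints the error line after the install, your shell simply can’t see the tools yet (open a new Terminal session and try again).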
Posted on February 4th, 2012 2 comments
Chances are you’ve heard of Memcache. Tons of websites use it to speed up page load times. I often say that Redis is like Memcache on steroids. You may not have heard of Redis, but if you’re using Memcache or APC, you should see how Redis could improve what you’re already doing. If you’re not using Memcache or APC yet, don’t bother – I urge you to take a look at Redis for a bunch of reasons.
First, Memcache is a key-value store only. You set a string value under a string key, and that’s it. With Redis, on the other hand, you have the luxury of several different types of data storage, including keys and values, but Redis also supports hashes, lists, sets and sorted sets.
Here’s an example to help explain why this is such a huge improvement. Say you have a big array of data, the kind that comes back from a web service request, like a parsed XML file or JSON packet. With Memcache, you have to serialize the data (and often base64 encode it) on the way in, and then, to get even a portion of the data back out, you have to fetch the whole string, base64 decode it and deserialize it before you can read from it. These extra steps needlessly chew up compute cycles.
With the same data object stored in a Redis hash, for example, you have instant access to the data stored under any field of the hash; you don’t have to grab the whole thing, deserialize it and all that mess. Just a single line of code, and boom, there’s your data. Much more elegant.
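To make the difference concrete, here’s a Python sketch that simulates both access patterns with plain dictionaries (no real Memcache or Redis connection; with the redis-py client, the hash read would be a single HGET):

```python
import json

# Memcache-style: the whole structure is one opaque string value
cache = {}
profile = {"name": "Chris", "city": "Denver", "plan": "pro"}
cache["user:42"] = json.dumps(profile)      # serialize on the way in

# To read ONE field, you must fetch and deserialize the WHOLE blob
blob = cache["user:42"]
city = json.loads(blob)["city"]
print(city)  # Denver

# Redis-hash-style: each field is addressable on its own
hashes = {"user:42": dict(profile)}         # stands in for HSET user:42 ...
city = hashes["user:42"]["city"]            # stands in for HGET user:42 city
print(city)  # Denver, with no serialize/deserialize round trip
```

The more fields the object has, and the more often you read single fields, the more the serialize-everything approach costs you.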
Another key reason Redis is superior to Memcache is that when you ask Memcache to store something, it’s in memory and that’s it. If your server goes down and you have to reboot, you have to repopulate your Memcache data all over again. If your app has gotten huge, and your cache is huge, this can not only take a while but also puts a huge strain on your database server during this so-called “cache warmup” period. Unlike Memcache, Redis actually stores a copy of its data to a file on disk in the background, so if you stop and start your Redis server, it reloads everything automatically. It does this mind-blowingly fast, too, like millions of keys in seconds.
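The effect of that background save can be illustrated with a toy Python sketch (pickle is just standing in for Redis’s RDB snapshot here; this shows the idea, not how Redis actually serializes anything):

```python
import pickle, os

store = {"user:1": "alice", "user:2": "bob"}   # the in-memory dataset

with open("snapshot.tmp", "wb") as f:
    pickle.dump(store, f)                      # like Redis's background save to disk

del store                                      # simulate a restart: RAM is wiped

with open("snapshot.tmp", "rb") as f:
    store = pickle.load(f)                     # warm restart: data is right back

print(store["user:1"])  # alice
os.remove("snapshot.tmp")
```

With Memcache there is no snapshot file, so the equivalent of that reload step is re-querying your database for everything.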
Finally, Redis supports master-slave configurations that you can use to build high-availability systems more easily. In the upcoming release (which everyone is very eager for), Redis Cluster will support sharding out of the box! So, now that you want to dig in and start learning Redis, here are my…
Top 10 Redis Resources Online
Notes
You may be wondering about NoSQL and where Redis fits into this discussion. When people bring up NoSQL, I tend to think of MongoDB. Unlike Memcached and Redis, MongoDB is a general-purpose document/object store (think JSON) that, strangely enough, allows you to use some SQL-like commands to retrieve subsets of your data. I think of Redis as a data structure server. You don’t use SQL to talk to Redis, so I guess it could be considered along with other NoSQL solutions. You can try MongoDB in your browser at try.mongodb.org/
- Redis documentation: redis.io/commands
- Try Redis Online: try.redis-db.com
- Redis-DB Google Group List Archives: groups.google.com/group/redis-db
- Antirez (Redis developer Salvatore Sanfilippo’s) blog: antirez.com
- Recent blog posts about Redis: RSS Feed
- Q&A: stackoverflow.com/questions/tagged/redis
- The Little Redis Book – Just released openmymind.net/2012/1/23/The-Little-Redis-Book
- Slides from Redis Tutorial simonwillison.net/static/2010/redis-tutorial
- A Collection of Redis Use Cases www.paperplanes.de/2010/2/16/a_collection_of_redis_use_cases
- My GitHub Page. Chock full of Redis-related project forks. github.com/phpguru
- Bonus: Here’s a slideshow for a Redis 101 talk I gave if you’re interested.
Posted on February 2nd, 2012 No comments
SQL Transactions with Kohana 3 – Note: Transactions were added in Kohana 3.1
StackOverflow Questions tagged Kohana-3 – Look for answers by Kemo, Samsoir and the other Kohana developers
KohanaFramework.org/discussions – Official Kohana Framework user forum
Kohana 3 ORM Tutorials and Samples – Terrific example usage of Kohana’s built-in ORM library
Kohana 3.2 Complete Tutorial – Quickly documented on the site, but comes with downloadable sample application
Useful Kohana 3.2 Modules & Code – Here’s BadSyntax’s link list of useful Kohana modules (Kohana OAuth 2.0, Media Compression, Minion CLI Task Runner, Minion Tasks Migrations, Kohana 3 Project Template, Pagination, Manage Site View Assets)
Blogging about Kohana 3.2? Get the RSS Feed
Posted on January 19th, 2012 No comments
I’ve been running WampServer for years on my trusty Dell XPS running Windows XP Pro. A while back I installed Subversion and got it working with mod_dav and authz_svn to serve multiple repositories, each with their own user and group permissions. It was tricky to set up and there are some finer points that most documentation I read doesn’t address. I followed a few different web resources like this great beginners guide, but ultimately it boils down to the 5 simple steps below.
Just recently I needed to add a new repository. I thought I had done everything right, but when I went to use it for the first time, I got the following errors:
D:\svn\repos>svn mkdir http://localhost:8080/svn/myproject/trunk -m "Trunk"
svn: OPTIONS of 'http://localhost:8080/svn/myproject': 200 OK (http://localhost:8080)

D:\svn\repos>svn ls http://localhost:8080/svn/myproject/trunk
svn: URL 'http://localhost:8080/svn/myproject/trunk' non-existent in that revision

D:\svn\repos>svn ls http://localhost:8080/svn/myproject
svn: Could not open the requested SVN filesystem
If you are getting any of these common errors, this post is for you.
When using svn over http, you have to use Apache’s configuration files to control access to each repository separately. Start by installing Apache, Subversion, and then referencing these three modules in your httpd.conf as follows:
LoadModule dav_module modules/mod_dav.so
LoadModule dav_svn_module modules/mod_dav_svn.so
LoadModule authz_svn_module modules/mod_authz_svn.so
Now we’re ready to begin.
1) Add the repository:
#> svnadmin create D:\svn\repos\myproject
*(On Unix systems, chown -R the new repository directory so it is writable by the user Apache runs as)*
2) Edit your httpd.conf (or extras/httpd-vhosts.conf) adding something like this:
<Location /svn/myproject>
  DAV svn
  SVNPath d:/svn/repos/myproject
  AuthType Basic
  AuthName "My Project SVN Repo"
  AuthUserFile c:/etc/svn-auth-file
  Require valid-user
  AuthzSVNAccessFile c:/etc/svn-acl
</Location>
3) Add the project to your svn access file at c:/etc/svn-acl (it’s referenced by the AuthzSVNAccessFile directive inside the Location block in your Apache config.)
[groups]
yourgroupname = yourusername, user_b, user_c

[myproject:/]
yourusername = rw
@yourgroupname = rw
This is what tells Apache which users and groups are allowed to access the path(s) in your repository.
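The access file uses a plain INI layout, so it’s easy to sanity-check before restarting Apache. Here’s a Python sketch that parses the example file from step 3 with the standard library (configparser is only standing in for Apache’s own parser here):

```python
from configparser import ConfigParser

# The same access-file contents shown in step 3
acl = """\
[groups]
yourgroupname = yourusername, user_b, user_c

[myproject:/]
yourusername = rw
@yourgroupname = rw
"""

parser = ConfigParser()
parser.read_string(acl)

# Group membership from the [groups] section
members = [m.strip() for m in parser["groups"]["yourgroupname"].split(",")]
print(members)  # ['yourusername', 'user_b', 'user_c']

# Path-based rule: the @ prefix means "this group"
print(parser["myproject:/"]["@yourgroupname"])  # rw
```

A typo in a section header like [myproject:/] is one of the easiest ways to get the “Could not open the requested SVN filesystem” style of surprise, so a quick parse check is cheap insurance.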
4) Give yourusername an htpasswd (and user_b and user_c)
cd c:/etc/
htpasswd -c svn-auth-file yourusername
*(If that file already exists, omit the -c option)*
5) Finally, restart Apache
httpd -k restart
Then you’re ready to create trunk
#> svn mkdir http://localhost:8080/svn/myproject/trunk -m "Adding trunk"
Committed revision 1.
I got the errors shown above when I skipped one or more of these steps.
Posted on January 2nd, 2012 1 comment
Why do I care about my hosts file?
You’re developing a website, and DNS is still pointing to the old site, or it’s parked at the domain name registrar, and you want to be able to test the site before DNS is updated. Maybe you don’t have access to the DNS registry for the domain but you need to see the site working how it will on the production URL.
This is especially handy when you’re building a WordPress site on your local machine but want it to “think” it is at the full .com domain. Since WordPress stores the site name in the database, this can be a handy way to develop locally, but on a fully-qualified .com domain.
Another example is to create your own “websites” on your local development environment using a “.dev” extension. So you can actually tell your local machine that “mywebsite.dev” is located on the local box. Very handy for web development.
What we can do is trick our computer into thinking that a website is at a different IP address than the one global DNS records are reporting.
This is accomplished by editing your hosts file.
Every system has one. The file is located in different places on Linux, Mac or Windows. On Linux systems, the file is typically located at /etc/hosts.
Note: Before editing your hosts file, make a copy of it as a backup!
Finding your Host File…
…on the Mac
If you have TextMate you can type
mate /private/etc/hosts
You can enter your username & password when you save the file.
If you know how to use vi you can type
sudo vi /private/etc/hosts
Or, if you just have the default Mac text editor, follow these steps:
- From Finder’s Go menu, choose Go to Folder…
- Type in /private/etc and click Go
- When the folder opens, right click on hosts and choose Open with other…
- Choose TextEdit from your Applications list
…on Windows
The process is a little more complicated on Windows. Here’s how I do it:
- Go to Start -> Run, or otherwise browse your installed program files
- Find Notepad (or Notepad++) and right click
- Choose Run as… or Run as Administrator
- Now in Notepad, navigate to C:\Windows\System32\drivers\etc\
- Switch the open dialog to show all files (*.*), not just text files (*.txt)
- Now open the file named hosts – it has no extension
What to put into your hosts file
Now that you have your hosts file open, you’ll see some default entries. The file has two columns to it. Whitespace is ignored — that is, you can use tabs or spaces to separate the IP address on the left, from the DNS names on the right. Comments are done by preceding the line with a hash or number symbol (#). Put each entry on its own line.
You’ll also see something like this:
127.0.0.1    localhost
This tells your system that localhost is located at the IP address 127.0.0.1. Entries like these are standard on all systems; don’t change them or your system could become borked (restore your backup!).
You can add new entries above or below the default entries, but I recommend adding a few line breaks ABOVE the default stuff and adding your custom changes at the top of the file.
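For example, the custom block at the top of a development machine’s hosts file might look like this (the names and addresses here are hypothetical):

```
# local development overrides
127.0.0.1      mywebsite.dev
203.0.113.10   www.example.com   # staging box, before the DNS cutover
```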
You can use this technique to point your domain name to the IP address of your hosting account before the DNS record is updated, create staging domains, development domains and so on. Hope you found this useful.
Posted on November 13th, 2011 No comments
I recently set aside an hour to read Robert Sosinski’s blog Starting Amazon EC2 with Mac OS X. What a fantastic guide that is! Thanks, Robert!
Hopefully he won’t mind my slightly modified mirror, below.
Starting Amazon EC2 with Mac OS X
Amazon EC2 (Elastic Compute Cloud) is now one of the top choices for cloud-based deployment. With EC2, you can ramp up to a massive server farm in a matter of minutes, then scale back down to a single server when things calm down. The benefits are obvious: you only pay for what you need, and you have access to more computing power right when you need it.
EC2 works on the idea of server instances. You start by building one instance, which costs as little as a few cents per hour of operation, and you can even start free (with a t1.micro for a month!).
An instance acts just like a dedicated machine, with full root access and the ability to install any software you choose. You can choose from a variety of sizes and operating systems. An m1.small instance, for example, comes with some pretty competitive system specs, including:
1.7 Ghz Xeon CPU
1.75 GB of RAM
160 GB of local storage
250 MB/s network interface
If your first instance gets some heavy traffic, EC2 can build another one automatically for another few cents an hour. Turnkey infrastructure has never been better.
First off, you have to set up your computer so you can connect to and administer your Amazon EC2 account.
If you don’t already have an account at Amazon.com, create one now.
1. Log into your Amazon.com account and then click over to the Amazon AWS subdomain and sign up for EC2. It will be linked to your Amazon.com account.
2. Once signed up, hover over the yellow “Your Web Services Account” button. Here, you should select the “AWS Access Identifiers” link.
3. Login, if prompted.
4. Select the “X.509 certificates” link.
5. Click on the “Create New” link. Amazon will ask you if you are sure, say yes. Doing so will generate two files.
A PEM encoded X.509 certificate named something like cert-xxxxxxx.pem
A PEM encoded RSA private key named something like pk-xxxxxxx.pem
6. Download both of these files.
What is PEM?
PEM (Privacy Enhanced Mail) is a protocol originally developed to secure email. Although rarely deployed for its intended purpose, its encoding mechanism for generating certificates is used by quite a few web services, including Amazon EC2, PayPal Web Payments Pro and SSH key pairs.
7. Download the Amazon EC2 Command-Line Tools.
8. Open the Terminal, go to your home directory, make a new ~/.ec2 directory and open it in the Finder.
$ mkdir .ec2
$ cd .ec2
$ open .
9. Copy the certificate and private key from your download directory into your ~/.ec2 directory.
10. Unzip the Amazon EC2 Command-Line Tools, look in the new directory and move both the bin and lib directory into your ~/.ec2 directory. This directory should now have the following:
The cert-xxxxxxx.pem file
The pk-xxxxxxx.pem file
The bin directory
The lib directory
11. Now you need to set a few environment variables. To help yourself out in the future, place everything necessary in your ~/.bash_profile file; this will automatically set up the Amazon EC2 Command-Line Tools every time you start a Terminal session. Just open ~/.bash_profile in your text editor and add the following to the end of it:
# Setup Amazon EC2 Command-Line Tools
export EC2_HOME=~/.ec2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
12. As you made some changes to your ~/.bash_profile file, you will need to reload it for everything to take effect. Run this:
$ source ~/.bash_profile
Creating and Connecting to a Server Instance
Launching an EC2 Instance from the Command Line on Mac OS X
Now that your computer is set up to work with EC2, it is time to make your server instance.
1. Type this into the Terminal.
$ ec2-describe-images -o amazon
What does the -o option do?
The -o option stands for owner. In this example, you are asking EC2 to describe the images that belong to Amazon. To see every image available, pass the -a option instead.
2. After a short wait, you will be given a list of available images which should look something like this.
IMAGE ami-20b65349 ec2-public-images/fedora-core4-base.manifest.xml
IMAGE ami-22b6534b ec2-public-images/fedora-core4-mysql.manifest.xml
IMAGE ami-23b6534a ec2-public-images/fedora-core4-apache.manifest.xml
IMAGE ami-25b6534c ec2-public-images/fedora-core4-apache-mysql.manifest.xml
IMAGE ami-26b6534f ec2-public-images/developer-image.manifest.xml
IMAGE ami-2bb65342 ec2-public-images/getting-started.manifest.xml
IMAGE ami-36ff1a5f ec2-public-images/fedora-core6-base-x86_64.manifest.xml
IMAGE ami-bd9d78d4 ec2-public-images/demo-paid-AMI.manifest.xml
Note that you can also do something like
$ ec2-describe-images -a > ami-list-2011-11.txt
and then search the generated text file for platforms you might need, such as magento or wordpress:
$ cat ami-list-2011-11.txt | grep magento
3. Let’s create something simple for now: a Fedora Core 4 machine with Apache. To do this, we need to generate a keypair. This keypair will supply the credentials we need to SSH (Secure Shell) into our server instance. To make a new keypair named ec2-keypair, type the following:
$ ec2-add-keypair ec2-keypair
4. This will create an RSA private key and output it to the screen. Copy this entire key, including the -----BEGIN RSA PRIVATE KEY----- and -----END RSA PRIVATE KEY----- lines, to the clipboard. Now go into your ~/.ec2 directory, make a new file called ec2-keypair, open it in your text editor, paste in the entire key and save it.
5. Next, it is important to change the permissions of your keypair file, or else SSH will refuse to use it when you connect. To do this, just type the following in your ~/.ec2 directory:
$ chmod 600 ec2-keypair
6. Time to create your new machine. Ensure you are in your ~/.ec2 directory and type the following, substituting “ami-23b6534a” with the id of the image you wish to create.
NOTE: It is important to understand that once you tell EC2 to start creating your server instance, you will start paying 10 cents every hour until you terminate it.
$ ec2-run-instances ami-23b6534a -k ec2-keypair
RESERVATION r-xxxxxxxx xxxxxxxxxxxx default
INSTANCE i-xxxxxxxx ami-23b6534a pending ec2-keypair
7. It may take a bit for EC2 to start your new machine, but you can always check its status by typing:
$ ec2-describe-instances
RESERVATION r-xxxxxxxx xxxxxxxxxxxx default
INSTANCE i-xxxxxxxx ami-23b6534a ec2.compute-1.amazonaws.com
8. Great, your instance is up and running. Take note of your server’s web address (ec2-xx-xxx-xx-xx.compute-1.amazonaws.com) and ID (i-xxxxxxxx), as you will need both of these later in this tutorial. If you forget them, you can always type the ec2-describe-instances command again. Now, let’s prep our server by enabling port 22 for SSH access and port 80 so Apache can serve web pages.
$ ec2-authorize default -p 22
PERMISSION default ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0
$ ec2-authorize default -p 80
PERMISSION default ALLOWS tcp 80 80 FROM CIDR 0.0.0.0/0
9. This is the moment you have been waiting for, connecting to your new machine. Open a new web browser window and type in your instance’s web address. You should now see an Apache welcome page.
10. Fantastic, your instance is serving the Apache test page. Now, let’s SSH into the machine and check it out. Ensure you are in your ~/.ec2 directory, as you will need your ec2-keypair file.
$ ssh -i ec2-keypair root@ec2-xx-xxx-xx-xx.compute-1.amazonaws.com
11. SSH will ask you if you are sure you want to connect. Just enter yes and you should be connected to your server instance.
Welcome to an EC2 Public Image (Rev: 2)
See /etc/ec2/release-notes.txt
Terminating Your Server Instance
Keep in mind that you are still on the meter. Because of this, you should shut down your server instance if you do not plan on using it.
1. Enter the terminate command with your server’s instance ID.
$ ec2-terminate-instances i-xxxxxxxx
INSTANCE i-xxxxxxxx running shutting-down
2. Take a look to see if everything is terminated.
$ ec2-describe-instances
RESERVATION r-xxxxxxxx xxxxxxxxxxxx default
INSTANCE i-xxxxxxxx ami-23b6534a terminated
3. Done and done.
Now you have an intro to using Amazon EC2 instances on Mac OS X. In step 7 above, you installed the command-line tools; check out the Amazon AWS Command Line Tools API for all the various ways you can monitor your EC2 instances and other AWS services from the command line. Here are a few more resources:
- AWS EC2 API Documentation
- Finding a suitable AMI – Amazon Machine Image
- Generating a new SSH Key-pair
- Launching an Amazon EC2 Instance
- You can even Launch EC2 Instance via the web with an HTTP Query
Amazing stuff, and more affordable than you might think. See Reserved Instances.
When starting instances, be sure to take note of the Availability Zone you’re starting your instance in. If you end up creating more servers, for example an Apache server, a MySQL server, and a Memcache or Redis server, you’ll want to make sure you start them all in the same availability zone to avoid unnecessary charges and security group headaches. More about AWS Availability Zones and AWS Security Groups over at Rightscale.
Posted on November 4th, 2011 No comments
This is a slightly modified mirror of http://homepage.mac.com/kelleherk/iblog/C711669388/E351220100/index.html
OK, so you have a nice replication setup, but how do you know it is actually working, and what do you do when it stops? This short article shows how to check and quickly fix replication that has stopped. This procedure takes 2 minutes and can be done remotely on the command line.
To check if replication is working, log into the slave and execute:
> SHOW SLAVE STATUS;
The result is something like this:
mysql> show slave status\G
*************************** 1. row ***************************
If either Slave_IO_Running or Slave_SQL_Running is No, then replication is stopped:
The Last_Errno and Last_Error might give you a clue as to what went wrong.
If all seems OK, you can also confirm further that everything is working by logging into the master and executing SHOW MASTER STATUS and comparing the binary log and exec position.
If a recovery is required, you can often do a quick one: find the point at which the slave stopped, then simply reset and restart the slave at that point in the master’s binary logs. If this quick procedure fails, you will have to perform the more time-consuming full copy from the master and restart replication as you did when you initially set it up.
Quick Reset Procedure
1) First, issue a STOP SLAVE
> STOP SLAVE;
2) Important: Next, issue a SHOW SLAVE STATUS and get the stopping point information
> SHOW SLAVE STATUS;
At this stage you must make note of the result of the SHOW SLAVE STATUS. If you don’t have this info on hand, you will not be able to complete the procedure. Usually I am accessing the server remotely in a terminal program, so I always copy the result from the screen and paste it into a text editor on my machine.
The information we need from that result is the point where the slave stopped in the master’s binary log: the Relay_Master_Log_File and Exec_Master_Log_Pos fields. The master_ssl values are optional, needed only if you are replicating over SSL.
3) Next, issue a RESET SLAVE:
> RESET SLAVE;
4) Now we issue a CHANGE MASTER command, for example (substituting your own values of course):
> CHANGE MASTER TO
    MASTER_HOST='192.168.1.241',
    MASTER_USER='repluser',
    MASTER_PASSWORD='replpassword',
    MASTER_LOG_FILE='binary-log.001',
    MASTER_LOG_POS=20041,
    MASTER_SSL=1,
    MASTER_SSL_CA='/path/to/cacert.pem',
    MASTER_SSL_CERT='/path/to/client-cert.pem',
    MASTER_SSL_KEY='/path/to/client-key.pem';
The last 4 master_ssl parameters are not required if not replicating over SSL.
5) Finally, start the slave:
> START SLAVE;
And check again with SHOW SLAVE STATUS to make sure we are replicating again.
Note: If you are getting repeated situations where replication errors out and stops, then you need to reassess your setup. It is VERY important to have BOTH master and slave on Uninterruptible Power Supplies, if that is not obvious! If you have recovered and still get errors, then a full recovery, getting a full dump from the master and setting up the slave from scratch, is necessary.
You can of course write a script to perform the slave-running check every 5 minutes and email you if replication has had an error and stopped. Jeremy Zawodny discusses ways in his book to automate slave replication checking and alert you when replication has stopped or fallen too far behind.
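The heart of such a script is parsing the SHOW SLAVE STATUS output. Here’s a Python sketch of just that step (the sample output and error number are illustrative; a real monitoring script would capture the output of mysql -e "SHOW SLAVE STATUS\G" and send the alert email):

```python
def slave_is_running(status_text):
    """Parse SHOW SLAVE STATUS\\G output; True only if both replication threads run."""
    fields = {}
    for line in status_text.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    return (fields.get("Slave_IO_Running") == "Yes"
            and fields.get("Slave_SQL_Running") == "Yes")

# Illustrative fragment of `SHOW SLAVE STATUS\G` output from a broken slave
sample = """
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
                   Last_Errno: 1062
"""
print(slave_is_running(sample))  # False -- the SQL thread has stopped
```

Cron this every 5 minutes, and fire off an email (including Last_Errno and Last_Error from the same parsed fields) whenever it returns False.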
What follows is a slightly edited mirror of http://homepage.mac.com/kelleherk/iblog/C711669388/E351220100/index.html
For the last year, kelleherk had avoided this, expecting it would be hard. But replication really is not that hard after all, and it makes backing up very easy, avoiding special scripts, SQL dumps, etc., as well as providing peace of mind against unrecoverable hard drive failure of your master server, knowing that you have a perfect recent (if not exact) copy of ALL databases on the slave. I was lucky enough to learn from some really amazing MySQL admins, and reading kelleherk’s post helped me remember how to do it.
Hardware/software scenario for these instructions was Apple XServes running OS X Server 10.3.4 (Darwin Unix version 7.4.0) and MySQL 4.0.20 standard binary installation. MySQL resides at /usr/local/mysql and the global my.cnf file is at /etc/my.cnf. I use the default (bash) shell.
The master has been running happily on its own dedicated XServe (serving mostly WebObjects applications) and needs a backup solution that takes an exact copy once per night of the master server without ever shutting down the master. Another XServe that acts as a fileserver has plenty of capacity to become a MySQL slave. All the commands on this post also work fine on MySQL 5.1 on Ubuntu Server.
These instructions involve shutting down the master one time, long enough to copy the contents of the mysql/data directory across the network to the slave. This was quick in my case, since all the servers share the same gigabit subnet and the databases were not too large. You also need root privileges, both in mysql and on the servers themselves. Command lines beginning with # below signify that the server root user is logged in. If you’re not logged in as root, you need to constantly sudo and enter a password, which adds unnecessary fluff to these instructions. But be careful: root has “no questions asked” power!
IMPORTANT: This also assumes that /usr/local/mysql/bin is the leftmost path in your shell PATH variable. This is required to make sure your mysql commands use the binary installation and not the “bundled” mysql that ships preinstalled in Darwin, which is NOT installed in /usr/local/mysql.
1) Preparing the slave
2) Prepare the master
3) Shut down the master MySQL
4) Copy the data directory from master to slave
5) Restart the master and verify the creation of a binary log
6) Finish configuring the slave
7) Start the slave and verify replication
Preparing the Slave
Simply download the binary installer package and run the installer for mysql, then run the installer for the Startup Item. DO NOT configure or start mysqld yet!
Login to slave as root
% su root (locally) or % ssh root@slave-ip-address (remotely)
If necessary, edit /etc/profile so that your PATH variable begins with /usr/local/mysql/bin and then log out and in again
Delete the newly installed mysql data directory, since we will be copying over the master’s data directory. WARNING! Don’t use this (faster) method on an existing slave unless you really know what you’re doing! If a slave is not that far behind and just needs to catch up, you can follow the steps on this page for restarting it instead.
# cd /usr/local/mysql
# rm -r ./data
Decide right now on a special user and password for replication, let’s say repluser and replpassword.
Also decide right now on a special user and password for backup shutdown/startup, let’s say backuser and backpassword
(You can substitute your own passwords!)
Next prepare the slave config file
# pico /etc/my.cnf
Enter the following slave configuration parameters
# this default slave mysql user only has SHUTDOWN privilege allowing the backup script on the
# slave to shutdown mysqld without providing a username and password
user = backuser
password = backpassword
#log-bin = /var/db/repl/binary-log

# Using the last portion of this machine's IP address for server-id
server-id = 143

# These are the master details (NOTE: master-host is the MASTER's IP address)
master-host = 192.168.1.241
master-user = repluser
master-password = replpassword
master-port = 3306
Next save file and close pico
[ctrl-o] and [ctrl-x]
…. and that’s it for now on the slave. Read below to finish with the slave setup.
Preparing the Master
Login to master as root. Let mysql server continue running for now.
Next create a directory, owned by the mysql user, for storing the master binary log (we don’t want it in the default location inside the data directory).
# cd /var/db
# mkdir repl
# chown -R mysql:wheel repl
Next update the master my.cnf file using pico text editor
# pico /etc/my.cnf
Now add these lines to the [mysqld] parameters
# This turns on binary logging and determines the pathname of the log
log-bin = /var/db/repl/binary-log
# server-id should be a unique id between 1 and 2^32 - 1
# I used the last portion of the IP address of this server
server-id = 241
Next save file and close pico
[ctrl-o] and [ctrl-x]
DO NOT restart the master mysqld yet! We want this my.cnf to be read only after we stop and copy the master data to the slave so that replication begins on identical copies of the databases.
Now log into mysql to add the repluser and backuser. Note that while we are creating these two users on the master, they will really be used on the slave ….. but remember that the master will soon be copied to the slave just before we begin replicating and these users and privileges will be mirrored on the slave after we copy over.
Note the following GRANT statements assume your subnet with the mysql servers have IP addresses beginning with 192.168.1. Change as appropriate for your situation.
# mysql -u root -p
mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repluser'@'192.168.1.%' IDENTIFIED BY 'replpassword';
mysql> GRANT SHUTDOWN ON *.* TO 'backuser'@'192.168.1.%' IDENTIFIED BY 'backpassword';
Check connections and decide when to shutdown the mysql server
mysql> SHOW PROCESSLIST;
Shutting Down the Master
When ready to shutdown…
# mysqladmin -u root -p shutdown
Copying the mysql data directory to the slave
When mysqld has ended we will use scp to copy the data folder to the slave
# scp -r /usr/local/mysql/data root@slave-ip-address:/usr/local/mysql
When finished copying we can restart the master. And don’t worry about the slave which is still not started. If the master is binary logging after we restart, the slave will read the log and catch up to synchronize.
Restarting the Master
# mysqld_safe &
Press return key.
Now check if binary logging is working
# cd /var/db/repl
# ls -al
You should see a file named binary-log.001. If not, you have to troubleshoot and fix it, then delete the slave data directory, shut down the master and copy over the data directory again before restarting. The only problem I had when I first did this was that I had a binary log name in my cnf file that mysql just did not like, so initially use “binary-log”, which is sure to work.
drwxr-xr-x 5 mysql wheel 170 24 Jun 09:32 .
drwxr-xr-x 23 root wheel 782 24 Jun 09:32 ..
-rw-rw---- 1 mysql wheel 20041 23 Jun 10:35 binary-log.001
-rw-rw---- 1 mysql wheel 56 24 Jun 09:33 binary-log.index
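For reference, the master-side settings that drive this are log-bin and server-id in my.cnf. A minimal sketch looks like this (the log path matches this setup; server-id just has to be unique among your master and slaves):

```
[mysqld]
# unique id for this server among master and slaves
server-id = 1
# enable binary logging; log files go to /var/db/repl
log-bin = /var/db/repl/binary-log
```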
You can examine the file like this:
# mysqlbinlog binary-log.001
If you wish, log into mysql and create a test database, add a table and insert a record. Then log out and examine the binary log; you will see the SQL statements in there, ready for the slave to execute.
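For example, a quick throwaway session like this (the repltest database and notes table are placeholder names, not part of the setup) will generate statements you can then read back with mysqlbinlog:

```
# mysql -u root -p
mysql> CREATE DATABASE repltest;
mysql> USE repltest;
mysql> CREATE TABLE notes (id INT AUTO_INCREMENT PRIMARY KEY, body VARCHAR(255));
mysql> INSERT INTO notes (body) VALUES ('hello from the master');
mysql> EXIT
# mysqlbinlog /var/db/repl/binary-log.001
```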
If binary logging is working, it’s time to finish setting up the slave.
Finish configuring the slave
First, fix ownership on the data folder we copied over (scp ran as root, so the files are owned by root):
# cd /usr/local/mysql
# chown -R mysql:wheel data
Verify the ownership if you wish:
# ls -al ./data
Now start the slave…
# mysqld_safe &
When the slave has started, log into it and check that the test SQL you ran on the master has replicated. But DO NOT run SQL statements on the slave yourself that would jeopardize its integrity as an exact copy of the master. If you wish, create a read-only user on the MASTER and then log into the slave with that user to verify replication.
In addition, you can look in the /usr/local/mysql/data directory to see the relay log. Also check out MySQL’s SHOW MASTER STATUS and SHOW SLAVE STATUS commands for checking replication status.
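As a quick sanity check, run SHOW MASTER STATUS on the master and SHOW SLAVE STATUS on the slave. In the slave’s output, the Slave_IO_Running and Slave_SQL_Running fields should both read Yes:

```
mysql> SHOW MASTER STATUS;
mysql> SHOW SLAVE STATUS\G
```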
If it’s working, congratulations! If not… then google it.
Posted on October 27th, 2011
Redis is a powerful memory-resident data store.
In order to give you a good background on what Redis is, let’s take a trip down memory lane, back to our first lesson on how computers work. Computers have a hard disk drive, which enables data and files to be stored permanently (or, at least, for very long periods of time). They also have memory, or RAM, which is volatile, but very, very fast, comparatively.
Why do we need both hard drives and memory? Economics, mainly. Cheap, huge hard drives can store millions of files permanently, even if the power goes out. The tradeoff for all that massive permanent storage is that hard drives are slow. RAM, on the other hand, is volatile storage, meaning that whatever is in RAM is lost when you power down your computer. Data in RAM is blazing fast, though. The trade-offs for all that blazing speed are high cost and impermanence.
The latest solid-state disks (SSDs) are breaking some of these rules by offering decent-sized permanent storage that’s faster and quieter than a regular hard drive, but not as expensive as RAM. SSDs are more like memory sticks with no moving parts than spinning hard drives. What’s Cool About SSDs would make a good blog post in the near future.
Back to caching. In general terms, a cache is simply a faster place to retrieve data from.
Your computer can access data stored in RAM much, much faster than it can access files on disk. When you launch a program, your computer is reading the application data from disk (slow) and loading it into memory (fast) so you can work on it.
Several forms of caching are used to speed up surfing the web.
You’re probably most familiar with client-side caching: your browser cache, or Temporary Internet Files for you Windows users. Web browsers use a local cache (just a special hidden folder) on your hard drive to store the stuff you’ve downloaded before. The theory here is that if you’ve already downloaded the home page of a site, chances are most of the files, scripts and stylesheets are reused across other pages on the site, so by keeping a local copy in the cache, your browser doesn’t have to fetch those assets from the server again.
Server-side caching, on the other hand, is a technique implemented by web application architects, to help speed up web applications. Some of the things that can be cached on the server-side include frequently-used files, query results, or processing PHP templates into their rendered HTML. With server-side caching, the idea is to alleviate some or most of the work your web server has to do when processing a particular request.
Going back to our lesson on how computers work, we know hard drives are slow and memory is very fast. So when talking about maximum performance for web servers, we had better be looking at storing data in memory.
Two of the most popular memory-caching platforms for PHP are Memcache and APC, the Alternative PHP Cache. These PHP extensions have been around a very long time, and are able to utilize RAM, instead of files on disk, to make data available to your PHP scripts almost instantly.
Memcache and APC are known as key-value stores. Memcache is only a key-value store; that’s basically the only feature it offers. APC is a key-value store and also an opcode cache. An opcode cache compiles PHP scripts into intermediate bytecode (opcodes) and keeps the result in memory, somewhat like compiling Java source into .class files, so each script doesn’t have to be re-parsed and re-compiled on every request. Cached bytecode runs faster than code that has to be parsed first.
With any key-value store, you hand it a key and some data, and then later, you can lookup the data again almost instantly by using the original key.
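To make that contract concrete, here is a toy Python sketch of the set/get interface that Memcache, APC and Redis all share (illustration only; the real stores run as separate server processes and add expiry, eviction and networking on top):

```python
class ToyCache:
    """A dict-backed stand-in for a key-value store's basic contract."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        # Store the value under the key, overwriting any previous value
        self._data[key] = value

    def get(self, key):
        # A miss returns None rather than raising, the way real caches
        # return a "not found" result instead of an error
        return self._data.get(key)

cache = ToyCache()
cache.set("user:42:name", "Alice")
print(cache.get("user:42:name"))   # -> Alice
print(cache.get("user:99:name"))   # -> None
```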
So why Redis? Like Memcache, Redis is a key-value store, but it also offers several unique, blazing-fast data structures, including hashes, sets, lists and more. In addition, Redis includes a terrific write-to-disk persistence feature as well as master-slave replication, giving you an extremely flexible and powerful set of tools that fits in perfectly with any database-driven web application. And all for the amazing low-low price of only $0.00! By combining APC’s opcode cache capabilities with the flexibility of Redis, you have everything you need to make your web applications really scream.
Whether you’ve never used a caching platform before or you’re already fluent in Memcache or APC for high-performance website scalability, come and learn why Redis is quite possibly the best thing since sliced bread.
Posted on October 25th, 2011
Magic sauce, so easy to misplace:
The string `netbeans-xdebug` is configurable inside NetBeans preferences.
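For context, that string is the IDE key Xdebug sends during its debugging handshake. A typical php.ini block for the Xdebug 2.x remote-debugging settings looks something like this (a sketch; the host and port shown are the defaults NetBeans expects, so adjust to taste):

```
[xdebug]
xdebug.remote_enable = 1
xdebug.remote_handler = dbgp
xdebug.remote_host = localhost
xdebug.remote_port = 9000
xdebug.idekey = "netbeans-xdebug"
```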