Thursday, December 1, 2011

Web App Framework Review: Sencha Touch, jQuery Mobile & more

I recently evaluated the different frameworks available for developing mobile web apps. I created the same app in Sencha Touch, jQuery Mobile and jQTouch. (I also built a responsive web site using media queries, but that isn't so much a framework as a technique.) Here is my summary.

Sencha Touch is very different from anything I have used before: all of the content of the web app is built with JavaScript. You code everything in JS. This makes things a bit janky, especially on older smartphones that need more time to process all that script. The biggest drawback, however, is that the online documentation is very difficult to use, especially for newbies. They have a few video tutorials online, but some are outdated and many are inconsistent (some use an MVC architecture, for instance, which really throws you off when the other tutorials don't). All in all, if you are a web developer by trade and are used to manipulating DOM elements directly, this is a very frustrating framework to learn. Be ready to spend a serious amount of time on the learning curve. (Also, you have to pay Sencha if you want the latest and greatest version of their framework.) After attempting a few different projects with Sencha Touch over several months, I grew frustrated and threw in the towel, even outsourcing the remainder of one project to India.

jQuery Mobile is well documented and has plenty of example code to follow along with. You develop your web app as you would a website and use data-* attributes on HTML elements to describe how they should be styled or behave. Once you have your structure, jQuery Mobile takes care of the rest, making everything look nice and employing fancy animated screen transitions. There is a small learning curve when getting started, but nothing compared to Sencha Touch. And last but not least, jQuery Mobile is FOSS and appears to be in very active development. I was very happy with the example projects I put together with jQuery Mobile. They were easy to finish and worked well.
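As a quick sketch of the data-attribute approach (the page id and link target here are my own illustration, not from a real project):

```html
<!-- A minimal jQuery Mobile page: data-* attributes drive styling and behavior -->
<div data-role="page" id="home">
  <div data-role="header"><h1>My App</h1></div>
  <div data-role="content">
    <ul data-role="listview" data-inset="true">
      <li><a href="#details">Go to details</a></li>
    </ul>
  </div>
</div>
```

You write ordinary HTML like this, include the jQuery Mobile script and stylesheet, and the framework enhances the markup into a themed, touch-friendly UI on page load.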

jQTouch, when I evaluated it, didn't appear to be in active development. The documentation was a little sketchy, and I found that the demo/example code and the documentation disagreed on how things should be implemented. It took some trial and error to find out which methods were correct. I wouldn't recommend this framework for development unless they have taken great strides in recent months.

I want to mention responsive web design because it has great potential. While it doesn't give you the web-app feel and isn't really a web app framework, it is definitely a good way to go if you need to develop a mobile web site. The pages load faster than they would using a web app framework (there is much less styling and JavaScript in the background). On the flip side, there are no fancy transitions, but that can be a good thing. You are in control of all the styling and the size of all the UI elements. Plus, when designed right, it works on screens of all sizes -- there is no need for special mobile sub-domains or redirects.
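The core of the technique is just a media query; a minimal sketch (the breakpoint and class names are my own illustration):

```css
/* Base styles serve small screens first */
.sidebar { display: none; }

/* On wider screens, reveal the sidebar and widen the layout */
@media screen and (min-width: 768px) {
  .sidebar { display: block; float: left; width: 25%; }
  .content { margin-left: 27%; }
}
```

One stylesheet, one URL, and the browser picks the right rules for its own screen width.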

In the end, no web app framework is perfect or completely replicates the native experience -- that hasn't happened yet. That said, I recommend jQuery Mobile and responsive web design as my favorites. They are different solutions for different problems (jQuery Mobile for a native-app experience, responsive design for informational sites), but I would strongly suggest evaluating both if you have a new mobile web project coming up.

Wednesday, November 30, 2011

Hebrew Flash Card Web App

While evaluating different web app frameworks, I put together a little web app to help me get back up to speed with my biblical Hebrew vocab. And if you know anything about trying to read the Hebrew Bible, knowing a good amount of vocab is (unfortunately) essential. The flash card app is built using Sencha Touch's framework and is at ... check it out on your smartphone or iPod touch. The chapter divisions are based on Basics of Biblical Hebrew by Pratico and Van Pelt (Zondervan) so it is a great free vocab tool for students using this textbook.

Thursday, November 17, 2011

Partition and Mount a Drive on CentOS

I have a few servers over at SoftLayer. I recently procured a monthly computing instance with an additional 200 gigs of drive space. Thing is, the extra HD space doesn't come partitioned, formatted or mounted. So, here is what you do:

First things first. Find the name of the physical drive: fdisk -l

This command will return a list of drives and information about each. You should find one that hasn't been partitioned with a name like /dev/sdb (or in my case /dev/xvdc for a computing instance) or similar. Check the size of the drive to make sure it is what you are looking for.

Next, partition the drive: fdisk /dev/xvdc (using the name of the drive, of course)

Once in the fdisk utility, press p to print the partitions to the screen. There should be none, because you haven't created any yet. If there are, are you sure you are using the right drive?

Next press n to create a new partition, press p for a primary partition, 1 for the first partition, and then use the default first and last cylinder (unless you know what you are doing, of course). Once this is set up, you can press p to make sure it worked and then finally, and most importantly, press w to write the changes to disk.

From the previous command, you'll get a slightly different name for the partition you created; it probably appended a number, so something like /dev/sdb1 or /dev/xvdc1 would be right. You'll need it to format the drive: mkfs -t ext3 /dev/xvdc1 (I've used the ext3 file system here because it suited me just fine and is probably the most common.)

Now, where would like the disk mounted? You'll have to create a folder as the mount point. If you want the disk to be used for a new /data directory, then you'll have to create a folder with this name: mkdir /data

Next, add the drive and mount point to the /etc/fstab file so that it will be mounted at boot time. Use your favorite text editor, as they say. I used vi /etc/fstab and I added a new row, matching the spacing of the other rows: /dev/xvdc1   /data    ext3   defaults   1 2

Lastly, you mount the drive using mount /dev/xvdc1 and you are all done.

Head on over with cd /data and check it out. Or run mount without parameters to see the details.
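For quick reference, the whole sequence in one place (the device names are from my setup, and these commands are destructive -- double-check the device name from fdisk -l before running any of them):

```
fdisk -l                      # identify the new, unpartitioned drive
fdisk /dev/xvdc               # n, p, 1, default cylinders, then w to write
mkfs -t ext3 /dev/xvdc1       # format the new partition
mkdir /data                   # create the mount point
echo '/dev/xvdc1  /data  ext3  defaults  1 2' >> /etc/fstab
mount /dev/xvdc1              # mount it now (fstab covers future reboots)
```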

PS. Here are a couple websites I used when I was working through this problem myself.

Wednesday, November 2, 2011

Synchronize Facebook Status & Twitter

There is so much unhelpful junk out there on the web when you do a Google search on synchronizing your Facebook statuses with Twitter. So, a quick post.

If you want your Facebook status to auto-tweet, go to

If you want your Tweets to auto-Facebook-status-update (why is there no verb for this?), go to

It's just a slight change in the sub-domain. You don't need any 3rd-party software or other service. Facebook and Twitter worked this out between themselves already.

Monday, October 3, 2011

jQuery Mobile Date Picker

I've been working with jQuery Mobile on a new form. I needed a date picker and was really impressed with what a man by the name of jstage has put together. It is really thorough. So, if you are looking for a flexible, mobile date picker, this is a good one.

Wednesday, September 28, 2011

2 reasons I dislike Google's new look

Google has been rolling out a new look lately. They are going with a very simple gray-and-red scheme (and orange in Blogger?) for most of their products. I like the idea of it, and I think it is very attractive. But I have 2 problems with it, and they both have to do with the usability of the new design.

1) They removed all but two visual cues. They've oversimplified it. When our eyes scan pages, they pick up visual cues about what is important and where to find things. It is very quick and very subconscious. Google used to have a lot of visual cues around the page with their old, colorful look. Now they have red buttons, blue search buttons, and everything else is gray. The red indicates something important; everything else, they are saying, is not. The problem is that this is way oversimplified. This binary visual cue system (that is, red or gray) really slows me down. It is hard to find things quickly or even to orient myself on a page. I even have to think harder to figure out which Google service I am using. Their new look doesn't help my brain instantly digest a page; it slows me down because it camouflages the entire page of non-red text links and interactions in a sea of gray.

2) The new look also devotes a lot of space to the top of the page. The Google search bar is prominent, then the actions. The rest of the page, the area where I spend most of my time working and interacting with important data (whether it is a calendar, an email list or voicemail transcripts), is now smaller. Not a lot smaller, but enough that it really is annoying. This area should be bigger if they are going to change things, not smaller. It is the most important part of the page. The search bar does not need a quarter of the screen; search is a single input and I spend only a fraction of my time there.

Friday, September 16, 2011

WordPress - Template for Parent Slug

WordPress allows you to create special template files for each category. You can name these theme template files category-ID.php or category-SLUG.php, which is nice. However, if you are viewing a category that does not have its own template, WordPress doesn't look for a parent category's template; it just skips that. In lieu of creating a template file for every child category (which would be a real pain), I found a nice script that I've edited and improved to provide expanded support (it didn't support slugs in the template name).

Place this in your functions.php file:
// Use a parent category's template if the child category has none of its own
function child_force_category_template($template) {
	$cat = get_query_var('cat');
	$category = get_category($cat);

	if ( file_exists(TEMPLATEPATH . '/category-' . $category->cat_ID . '.php') ) {
		$cat_template = TEMPLATEPATH . '/category-' . $category->cat_ID . '.php';
	} elseif ( file_exists(TEMPLATEPATH . '/category-' . $category->slug . '.php') ) {
		$cat_template = TEMPLATEPATH . '/category-' . $category->slug . '.php';
	} elseif ( file_exists(TEMPLATEPATH . '/category-' . $category->category_parent . '.php') ) {
		$cat_template = TEMPLATEPATH . '/category-' . $category->category_parent . '.php';
	} else {
		// Fall back to the parent category's slug template
		$cat_parent = get_category($category->category_parent);
		if ( file_exists(TEMPLATEPATH . '/category-' . $cat_parent->slug . '.php') ) {
			$cat_template = TEMPLATEPATH . '/category-' . $cat_parent->slug . '.php';
		} else {
			$cat_template = $template;
		}
	}
	return $cat_template;
}
// 'category_template' is a filter, so hook it with add_filter
add_filter('category_template', 'child_force_category_template');

Monday, August 29, 2011

JavaScript Array Scrambling

Ever need to randomly re-order an array in JavaScript? I have. And, unfortunately, JavaScript has no native equivalent of PHP's shuffle() function.

I found a few people suggesting this technique, but I have found it not to be very good. The results weren't all that random; the first and last elements in particular were usually one of only two values.

myArray.sort(function(a, b) {
	return Math.round((Math.random() * 100) - 50);
});

This uses the native sort() function, which requires a comparison function to determine which of two values comes first: a negative return value indicates that the 'a' item goes first, and vice-versa if it is positive. The catch is that sort() expects a consistent comparison, so feeding it random answers produces skewed, implementation-dependent results.

I found the following function to be more random:

function mixArray(arrayIn) {
	var arrayOut = [];
	var origLength = arrayIn.length;
	for (var x = 0; x < origLength; x++) {
		// Pick a random element from what's left and move it to the output
		var randIndex = Math.floor(Math.random() * arrayIn.length);
		arrayOut.push(arrayIn.splice(randIndex, 1)[0]);
	}
	return arrayOut;
}
myArray = mixArray(myArray);
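For what it's worth, the textbook way to do this is the Fisher-Yates (Knuth) shuffle, which shuffles in place with a uniform distribution and without the splice() overhead -- a sketch:

```javascript
// Fisher-Yates shuffle: walk backwards from the end, swapping each
// element with a randomly chosen element at or before it.
function shuffle(arr) {
  for (var i = arr.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1)); // 0 <= j <= i
    var tmp = arr[i];
    arr[i] = arr[j];
    arr[j] = tmp;
  }
  return arr;
}

var deck = shuffle([1, 2, 3, 4, 5]);
```

Every permutation comes out equally likely, which the random-comparator trick above cannot guarantee.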

Saturday, August 13, 2011

CentOS and "su: incorrect password"

I made a mistake. I was setting up CentOS and accidentally ran chown -R 0 /. Yep, a global & recursive ownership change. Not a good thing.

The first problem I noticed was that I couldn't use su -. It kept saying that I was using the "incorrect password" even though I was completely sure I was.

After digging around, I found someone who had the same problem and a nice chap named Mike who, very impressively, pointed out the source of the problem (ie, the globally recursive chown). His response: "He's got rather a mess, unfortunately." Uh oh.

Well, it seems that the mess wasn't as bad as I thought. After fretting and digging around the interwebs, I came across this article about resetting file permissions on RHEL (and CentOS by extension, of course). Hallelujah!!

Following the instructions there, I did this to fix my woes:

rpm --setperms -a

I am not convinced everything is back in its proper place, and I will probably see more fallout from my careless command execution. BUT, I can successfully "su -" without any bogus password complaints. That I like.

Monday, August 8, 2011

Allstate's Roadside "Assistance"

UPDATE 8/9/11: Allstate tracked me down to make things right. They are refunding my payment and explained that things don't normally go this way. What an impressive response.

It should be called "Allstate's Roadside Torture". And, I should have just googled "tow truck" ... I would have saved a lot of wasted time. I need to rant, badly.

My wife is pregnant and stuck in a parking lot. She calls the number that Allstate gives us for roadside assistance and can't even get past the first robo-bot operator. So she calls me for help. So, I call Allstate's well-promoted "Good Hands Roadside Assistance"... this is how it went.

I call the phone number they publicize for the service (1-800-255-7828). It is answered by a machine that wants me to speak what I want. So I say "roadside assistance." "You want corporate?" replies the machine. No, "I want help." Not working, so it offers me a directory, and of course there are no roadside assistance options. After some very unfruitful back and forth with this annoying bot, it sends me to an operator.

The nice foreign operator would like to validate my policy. He needs my name. Then my SSN last four. Then he needs to validate every other piece of information he can find. Gotta check my zip and address and phone and email address. Seriously!? I need help. And these are the things you want to ask me about?

When he finally finishes with his 20 questions, THEN he tells me I need to call another number. Call 1-877-248-1266. Great ok. Thanks for that.

Now I am on to this new number. And a new game. First I get a message about adverse weather conditions in my area... I live near Seattle and it is the middle of the summer. What adverse weather conditions? You mean the 70 degrees and overcast? Is this causing havoc on your systems? Now after explaining that I need help to a new telephone operator, he also tells me I need to call a different number. This is a joke, right? Nope, I have to call a new number: 1-877-266-7561.

On to the third operator. Now I am having fun. Now I need to give him my credit card, phone number, email address, home address, zip code and whatever else he could come up with. After 10 minutes of divulging personal information, he could finally get me assistance. He puts me on hold and then finds someone that can help within the next 50 minutes. 50 minutes?! I am in the middle of an urban shopping center and 50 minutes is their timeline? If I were in the mountains or some rural community, I could understand. I guess it is the best they can do because of all this crazy weather (sarcasm).

I will NEVER use Allstate's roadside service again. EVER. And Allstate needs to try their own service before they sell it.

Friday, August 5, 2011

SE Linux and "Can't connect to MySQL server"

I've been setting up a server for the last few days for production. It is to be highly secure, so it is a real pain. Anyway, I ran into this problem where I could not connect to the remote database. It was driving me nuts. (I'm using CentOS by the way, and I was setting up a web server to connect to a remote MySQL database server.)

First, I checked that I was getting network connectivity by doing

telnet <db-server-ip> 3306

from the command line (where <db-server-ip> is the IP address of the database server). And, I got some gobbledygook that had the word "MySQL" in it, so I knew that worked and the network was there.

Then, I checked the database's user list, to make sure connections were allowed from the user at my host. I did this by going into MySQL's CLI and entering

SELECT `Host`,`User` FROM mysql.user WHERE 1;

This printed out a list of all the users so I just made sure it and the host were correct. (There are more MySQL troubleshooting tips here.)

Then it finally hit me. What always causes problems for hours on end? SE Linux!! I know it is good for my server, but it sure does drive me nuts sometimes. If you want your web server to be able to connect to a remote DB, you have to give it rights with SE Linux. Run this at the command line:

setsebool -P httpd_can_network_connect_db on

and for good measure:

setsebool -P httpd_can_network_connect on

There, all fixed!

Update: HA HA! I've had this problem before and completely forgot about it.

Monday, August 1, 2011

PHP UTF-8 Script Input Cleaner

When you switch to UTF-8 on your website, there are a few things that everyone recommends (for good reason), like using the multi-byte string functions (e.g., mb_strlen()) and declaring the charset in an http-equiv meta header. One of these commonly recommended things is that you should clean up all user-submitted input.

With UTF-8 comes the ability to submit a lot of crazy characters to the script, either by POST or GET. These crazy characters might be control characters, invalid UTF-8 characters or some other charset that was mixed in for good measure. So, I created the following function to help clean my inputs:


function cleanUTF8(&$input, $stripSlashes = true) {
	if ($stripSlashes) $stripSlashes = get_magic_quotes_gpc();
	if (is_array($input)) {
		foreach ($input as $k => $v) cleanUTF8($input[$k], $stripSlashes);
	} else {
		if ($stripSlashes) $input = stripslashes($input);
		$input = mb_convert_encoding($input, "UTF-8", "UTF-8"); // drop invalid UTF-8
		$input = preg_replace('!\p{C}!u', '', $input);          // strip control characters
	}
}


It is a recursive function in that it will iterate into a variable if arrays exist, as they sometimes do. It will also strip slashes for you if your version of PHP still has magic quotes on.

It removes invalid characters through the mb_convert_encoding() function. Anything that is not UTF-8 is dropped. Lastly, the fancy preg_replace() function removes all control characters. (The \p{C} means "all control characters", the !! are delimiters the same as // or ##, and the last "u" modifier means "treat this as UTF-8.")

At the top of your script add this to iterate over the input array and clean up the data:
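(The snippet itself appears to have been lost from the original post; presumably it was just a pair of calls like the following, applied to whichever superglobals your script reads.)

```php
<?php
// Clean all incoming request data in place before using it
cleanUTF8($_GET);
cleanUTF8($_POST);
cleanUTF8($_COOKIE);
```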




Wednesday, June 15, 2011

HTML5 Video File Conversions

I think the best part of HTML5 Video is getting to convert the source file into a billion different formats for all the browsers (note: sarcasm). For a while I was struggling with Adobe's Media Encoder, trying to get that to do my work. It never really rose to the challenge.

Recently I've been using Adobe just for the FLV/FLA files (for the Flash fallback) and using Miro's video converter for the rest. It works a lot better, but is still not foolproof. I've found that the iPad and iPhone formats are still a little hit-or-miss when I test on the various iOS versions out there. What I like about Miro, though, is that I don't need to tune every parameter for the mp4. All I do is pick iPad and it takes care of the rest. Beautiful.

Another option is Zencoder, which has a nice API you can use. This would be a great option if you had to convert uploaded video or if you had a ton of video you wanted to convert programmatically. Just wanted to throw that out there.

Saturday, June 4, 2011

iPhone, HTML5 Video & AWS: Movie format unsupported

I've been working hard on a website that will have a video section using HTML5 video with a flash video fall back. It has taken way longer than I ever expected. What a mess this has become, trying to support every browser's own format, and then trying to support old browsers on top of that. Yikes.

One particularly troubling part has been the iPhone/iPod Touch support for video. Using multiple video encoders (Miro, QuickTime Pro, and even Sony Vegas HD) I was unable to post any video on the web that did not cause the iPhone to say "This movie format is unsupported." And it was crap, because I knew it WAS supported. And, even worse, the video would sometimes work... randomly.

Well, after lots of pain and banging my head against the wall, I found out it was because I was using HTTPS on CloudFront. For some reason, iOS occasionally rejects Amazon CloudFront's SSL/TLS. Nice.

So, just take the S off of the HTTPS when playing HTML5 video served from Cloudfront.

Wednesday, June 1, 2011

Flowplayer Flash Video and SetReturnValue

I recently set up an HTML5 video system with a Flash video fallback, for those crazy people with old browsers. There is a great site explaining how to do this; use it together with the info here for a great solution.

Anyway, the Flash fallback using Flowplayer commercial was giving me a bunch of issues on IE7 & IE8. One was an error being logged in the console about 'SetReturnValue' not being defined. It turns out the <object> tag must have an ID; the example code I copied from VideoJS's front page didn't have one. So, make sure you have a unique id set.
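In other words, something like this (the id value and file names here are just illustrative):

```html
<!-- IE needs an id on the <object> tag for the Flash fallback to work -->
<object id="flowplayer_1" width="640" height="360"
        type="application/x-shockwave-flash" data="flowplayer.swf">
  <param name="movie" value="flowplayer.swf" />
  <param name="flashvars" value='config={"clip":"video.mp4"}' />
</object>
```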

The other issue had to do with Flowplayer's branding removal (with a commercial license). It turns out the registered domain name is matched not against the domain the video is played on, but against the domain the swf file is loaded from. I had the swf files on a CDN and the branding was not removed; I just had to move the swf back to the main site and all was good.

Tuesday, April 5, 2011

How to fix MySQL Replication Error 1236

We have some websites that use two database servers in master-master replication. Recently one of the servers died and had to be resurrected. (They are cloud based and SoftLayer doesn't seem to have their cloud-based offering thing nailed down, yet. Every week one of them will go down and be unresponsive until tech support does some magic.)

After one of the servers was brought back up, the other server would not connect. It showed "Slave_IO_Running: No" and "Seconds_Behind_Master: NULL", which means it was not playing nicely.

First, I went to the MySQL log files, which on this server are found at /var/log/mysql.log, and looked at the last few messages by running "tail /var/log/mysql.log" from the command prompt. This revealed the error number (server_errno=1236). It also had the following info:
Got fatal error 1236: 'Client requested master to start replication from impossible position' from master when reading data from binary log

Just before this entry in the mysql.log it indicates the file and position that it is trying to read. So, with this data in hand, I headed over to the master server and took a look at the bin logs. They are located in /var/lib/mysql/. Here I took a look at the file in question using the mysqlbinlog utility. I used the following command to check out the bin log. Obviously, you'll have to replace the position and file name with the position and file indicated in your mysql.log.
mysqlbinlog --offset=128694 mysql-bin.000013

And, this is what I saw here, among other things:
Warning: this binlog was not closed properly. Most probably mysqld crashed writing it.

Well, that explains things! When the server crashed, the bin log was not closed properly. This is easy to fix. Going back to the slave server, I stopped the slave, reset the bin log position and started the slave again:
STOP SLAVE;
CHANGE MASTER TO MASTER_LOG_FILE = 'mysql-bin.000014', MASTER_LOG_POS = 4;
START SLAVE;

I simply pointed the slave to the start of the next bin log (a fresh bin log begins at position 4). It started right up with no problem.

Update 5/21/13 - An anonymous commenter made a good point about potentially losing data using the above technique. If you are in a situation like mine (master-master replication with hundreds of gigs of non-critical data), this is really the only way to get back up without significant downtime. But if you are in a master-slave configuration with a manageable data set, or it's critical that your slave doesn't miss any data, you should probably dump the master database and re-create the slave database to make sure you didn't miss anything.

Saturday, March 12, 2011

SimpleXMLElement and EntityRef XML parser

I am using the PHP class SimpleXMLElement to take care of parsing some XML data that I am sourcing from 3rd parties. It had been working well for a while, but I just discovered an error that was popping up frequently. This error was "XML parser error : EntityRef: expecting ';'".

This error comes about as a result of XML input data being improperly encoded. Two data sources I was using had escaped characters like the ampersand and angle brackets but left off the semi-colon. In other words, the ampersand had been encoded as "&amp" instead of "&amp;". SimpleXMLElement doesn't like this and throws a warning fest.

To fix the problem, I added a line before calling SimpleXMLElement:
$xmldata = preg_replace('/&(amp|lt|gt)(?!;)/', '&$1;', $xmldata);
$obj_xml = new SimpleXMLElement($xmldata);

The preg_replace() adds the missing semi-colon for you; the (?!;) lookahead matches only entities that aren't already followed by a semi-colon, and also catches an entity at the very end of the string. Just a note: this only fixes the encoding for the three entities specified above ("&amp", "&lt" and "&gt"). If there are others causing problems, you'll have to add them to the alternation in the first argument of preg_replace().

Here is another blog/article that helped me discover the underlying issue.

Wednesday, March 2, 2011

Computer Backups

I use Backblaze to back-up my computer's files. Let me explain why...

It is always important to have your files backed-up. And to have those files backed-up in a location that is secure. That is, away from your computer and locked and/or encrypted. There are a few big players in the online backup service sector. Carbonite and Mozy are both good examples.

It is important to have all my files backed up, especially on my work computer. If something were to happen, like fire or theft, I need to have a backup copy of all my files so I can get back to work as soon as possible. The great thing about online back-up services is that your back-up happens automatically, whenever you have an internet connection. The more automated the better!

I've been using Carbonite for the past couple of years for my laptop. It has worked pretty well. I like how they add a little icon in your file explorer on top of each folder and file that is backed up. It is a great visual cue that helps me see exactly what is backed up and what is not.

HOWEVER, over the past few months my computer fan will turn on really loud, even when I am not using the computer. Looking closer I found that Carbonite's software was eating around 50% of my processor, causing my computer to heat up and the fan to turn on. As soon as I disabled Carbonite, the CPU utilization fell and the fan turned off. That's not right!

Well, I finally broke down and contacted their technical support. First, their technical support sucks. They have a crummy web interface, their emails are formatted all funny, and the people on the other end take forever to get back to you. But the worst part is that after I jumped through their hoops and they finally got back to me, all they said was that I use my computer "too much." Seriously!? It runs at 50% even when I am not using my computer.

What a waste of my time.

SO, now I am using Backblaze. I heard about them a while back through this Slashdot article about how they had made their hardware design public. Very cool: open-source hardware. Now that's an innovative company. And, as a double bonus, they will encrypt my files in a way that is extra safe. Only I can open my files. (I have a feeling those tech guys at Carbonite can browse my personal files whenever they'd like.) Last, but not least, when I need to restore my backed-up files, they will mail me a physical disk with all my files on it. Awesome!

And, I must mention this because I am a web designer, the Backblaze website is WAY better looking. Nice work, guys!

Thursday, February 24, 2011

Storing Passwords in Databases

In the last few weeks/months there have been a couple high-profile computer breaches in the news. One was at Plenty of Fish, an online dating website. The second was at HBGary, a computer security company (ah, the irony).

Plenty of Fish made a major security mistake. They stored passwords in their database in plain text. This means that anyone with access to their database (legitimately or through a SQL injection vulnerability) could see anyone's password. And this is typically a big security no-no. Passwords should always be hashed before being stored in a database.

Hashing is a one-way function: a password (or any character string) can be hashed, but the hash cannot feasibly be reversed to recover the original. So, when a user logs in, you hash their entered password and compare it against the hashed password in the database. If they match, the password is the same. It is simple AND safe.

Two particular hashes are quite popular: SHA-1 and MD5. Both are beginning to show their age, and MD5 in particular has known collision vulnerabilities. Some of the more recent hashes are SHA-256 and SHA-512. (Use the hash() function to implement them in PHP.)

Over at HBGary, the "security" firm, they actually did use hashes to store passwords, with the common MD5 hashing algorithm. The problem with their implementation is that they added nothing to the password before hashing it. This is an issue because people have created massive online look-up collections ("rainbow tables") of auto-generated hashes of every string from roughly 1 to 12 characters. So, if you are a hacker with a hash in hand, you look it up in one of these tables and, if it came from a normal-sized password, it will likely be found.

The best way to make this look-up table irrelevant is to add a set of constant, known characters to the password before the hashing takes place. This technique, called adding "salt" to the password, will create an extra long password, that will never show up in a look-up table. Why? Because it would take more than a lifetime to compute look-up tables that long. Hashing is an expensive operation, in terms of CPU cycles, and the longer the original text, the longer it takes to compute.
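A minimal PHP sketch of the idea (the salt value and variable names are my own illustration):

```php
<?php
// Hash a salted password before storing it -- never store the plain text
$salt = 'some-long-random-site-salt';          // illustrative value only
$stored = hash('sha256', $salt . $password);

// At login: hash the submitted password the same way and compare
$ok = hash('sha256', $salt . $_POST['password']) === $stored;
```

(A per-user random salt stored alongside each hash is even stronger than a single site-wide salt, since it forces an attacker to attack every account separately.)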

If you are building a website, and you have passwords to store, remember: salt & hash. Mmmm, sounds tasty!

Friday, February 18, 2011

Mac crash / Mac blue screen flash during use (not at startup)

A few days ago my MacBook Pro (running OS X 10.6.6) started having some weird issues. I would be working on the computer and all of a sudden the screen would flash blue, then a few seconds later show my blank desktop background, and then finally the dock would come back, all within a few seconds. All of my applications would be closed as if it had restarted.

This has happened about five times, all when I was using different programs. The first time it happened, it was after I dragged a file to trash; after that it happened while using iPhoto and it had frozen up on me; and another time while using Safari. Most recently, the blue screen flash occurred after waking the computer from sleep and trying to close the browser windows in Google Chrome.

I've researched a lot of Mac forums and apparently this is a common problem, and I saw posts from 2007 that were never resolved; I still haven't found a forum with a clear answer.

Anyway, I reviewed the console from the crash this morning and saw quite a bit of activity around the time of the blue screen flash, like "windowserver port death" and a whole lot of errors and warnings. This narrowed it down to a possible problem with the window server. I took it in to the nearby Apple store and spoke with a Mac Genius. I showed him the console with the errors, and he suspected it might be a permissions error causing the system to crash and then reboot (it looks like it is logging out and then re-logging in at a rapid pace).

The Mac Genius attached an external hard drive and rebooted from that and ran permissions repair. He recommended doing this every 3 months or so since it is easy to corrupt permissions. I had never done this before, and it took about 12 minutes. He also said he would recommend doing this at home by booting from the Install DVD I got with the computer. To do this, you have to press the 'option' key during startup in order to have the option to boot from the DVD. (I guess this is a change from pressing "C" with older models.)

Here's how to reboot and repair permissions from the OSX install DVD:

  • Insert the install DVD, restart the computer, and hold down the 'option' key during startup.
  • This will give you the option of booting from the hard drive or the DVD – select the DVD.
  • The reboot could take up to 5 minutes or so.
  • When the install screen comes up, go to "Utilities" in the top menu bar and select "Disk Utility."
  • Click on the main hard drive (not the subfolder/partition), click the "Repair Disk Permissions" button, and allow the process to complete. This could take a while, especially if it has never been done before. (It could take up to 3 hours, but mine took 12 minutes.)
  • Quit the Mac OSX installer from the menu bar and restart.

We were thinking this should clear up the problem, but if not, the second resort is to archive & reinstall OSX (archiving will save all your data), and a third resort is to erase and install (after a full backup of all data, of course). We'll see what happens! Hopefully it will work. That would be nice, since it is rather annoying to get the blue screen flash in the middle of doing something and lose any data you were working on.

Update 5/17/11
Well, after many months, many disk permissions repairs and many more blue screen crashes, we decided to go back to the Genius Bar. It was evident that the problem was not fixed. When we returned they ran a test on the hard drive. It apparently failed the test to some degree because they replaced the hard drive. Perhaps it was a hardware problem all along? We'll see.

-- This is a guest post by my wife Susan

Tuesday, February 15, 2011

VSFTPD & SELinux on CentOS

This can be a fun combo to work with. SELinux, or Security-Enhanced Linux, is the life of any party. And Google searches about SELinux-related problems make it pretty evident that very few people have taken the time to understand how it works. I ran across numerous people simply suggesting that you turn off SELinux if it is getting in the way.

Well, I didn't buy into this wholesale approach to getting things to work. Besides, I want a secure system and if SELinux is going to help me in the long run, I want it enabled.

VSFTPD, or Very Secure FTP Daemon, is a pretty standard FTP server. You can install it on CentOS systems (or RHEL, for that matter) by running "yum install vsftpd" from the command line. Once it is installed you can make changes to the configuration file at "/etc/vsftpd/vsftpd.conf" with a text editor. Also, to make sure it is always running in the background, run "chkconfig vsftpd on" and "service vsftpd restart" from the command line.

Make sure the configuration file is set up properly. Only give FTP access to a limited number of users (never include root), disable anonymous access and make sure users can only access the files they need (I recommend chroot-ing them).
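
For reference, the relevant part of /etc/vsftpd/vsftpd.conf might look something like this (these are stock vsftpd option names, but treat the exact values as a starting point, not a complete hardening guide):

```
# /etc/vsftpd/vsftpd.conf (excerpt)
# No anonymous logins; local accounts only, jailed to their home dirs.
anonymous_enable=NO
local_enable=YES
write_enable=YES
chroot_local_user=YES
# Treat /etc/vsftpd/user_list as an allow-list of permitted users.
userlist_enable=YES
userlist_deny=NO
```

Remember to restart the daemon ("service vsftpd restart") after editing the file.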

This is only half the battle. Now that VSFTPD is running, how do you allow users to access their home directories? SELinux will usually get in the way of this. Well, there is an SELinux boolean for this, called "ftp_home_dir", that allows users to access their home directories via FTP. To set it, run the following from the command line:
setsebool -P ftp_home_dir 1

Be sure to also check file permissions and file ownership if you run into problems. If a user does not own a file (or belong to its group), the file must be world-writable before they can change it, so it is usually cleaner to fix the ownership than to open the file up to everyone.
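
As a quick illustration (the user and paths here are hypothetical, and the real fix on a server would be a chown as root), directory modes like these keep write access limited to the owner and group:

```shell
# Stand-in for a real home directory; on a server you would also run,
# as root:  chown -R alice:alice /home/alice
mkdir -p /tmp/demo-home/alice/uploads
chmod 755 /tmp/demo-home/alice          # others can read/traverse, not write
chmod 775 /tmp/demo-home/alice/uploads  # owner and group can write
ls -ld /tmp/demo-home/alice/uploads
```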

If this fails to grant you FTP access where you need it, or your set-up is slightly different, you can always allow the FTP daemon full access to all files by running
setsebool -P allow_ftpd_full_access 1

This grants a bit more power to the FTP daemon than is strictly necessary, but it is much better than just disabling SELinux altogether.

By the way, here is a great intro to CentOS & SELinux.

Wednesday, February 9, 2011

Calculating Popularity

I am working on a new site that will have a sort of ranking system. It will be used to list a series of resources that visitors can rate and score. This scoring will then drive what resources are listed first and what resources are listed last.

Well, how in the world do you calculate ratings? I want to make sure new sites get a chance to rank high (so that old sites don't stay at the top forever). I also want to make sure that sites with a low number of votes aren't put at the top just because both of their 2 voters gave the maximum score. And the list goes on: the more you think about it, the more there is to it.

Well, this blog entry does a great job of explaining different ranking algorithms and how they work. I give it a great score and a high rank. ;)
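
One approach discussed there, the lower bound of the Wilson score confidence interval, rewards items that have both a high positive ratio and enough votes to back it up. A rough sketch of it (my own, using awk at the shell):

```shell
# Lower bound of the Wilson score interval at 95% confidence (z = 1.96).
# $1 = positive votes, $2 = total votes. A sketch, not production code.
wilson() {
  awk -v pos="$1" -v n="$2" 'BEGIN {
    if (n == 0) { print "0.0000"; exit }
    z = 1.96; p = pos / n
    printf "%.4f\n", (p + z*z/(2*n) - z*sqrt((p*(1-p) + z*z/(4*n))/n)) / (1 + z*z/n)
  }'
}
wilson 1 1      # a single maximum vote still scores low
wilson 90 100   # many mostly-positive votes score much higher
```

Sorting by this score keeps a resource with one perfect vote from outranking one with ninety positive votes out of a hundred.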

Thursday, January 13, 2011

Database connection (mysql_connect) taking a long time

After setting up a new database, the connection from the web servers was really slow. We had enough combined traffic to slow down the response of the PHP function mysql_connect() to between 5 and 20 seconds. But the current load on the MySQL server wasn't that high... something fishy was going on.

What I found was that the database server was trying to do a reverse DNS look-up on the IP address of every incoming connection, and every connection was stalling on that look-up. You can disable this with the following option, as explained here.

"Do not resolve host names when checking client connections. Use only IP addresses. If you use this option, all Host column values in the grant tables must be IP addresses or localhost. See Section 7.9.8, 'How MySQL Uses DNS'."

Anyway, you can make this change in the configuration file so it is loaded at start-up (it doesn't have to be a command-line option). Simply add "skip-name-resolve" on a new line in the /etc/my.cnf file and restart your DB server. Voilà!
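
To be precise about placement, the option belongs under the [mysqld] section of the file (section name per the standard my.cnf layout):

```
# /etc/my.cnf (excerpt)
[mysqld]
skip-name-resolve
```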

Friday, January 7, 2011

Good Design

I've been thinking about good design for the past few months. Apple always seems like the best example of good design because of the simplicity, ease of use and ruggedness of its products.

I came across this article today about Dieter Rams, a product designer who has influenced many, including Apple.

Here are 10 principles of good design attributed to him:
  • Good design is innovative.

  • Good design makes a product useful.

  • Good design is aesthetic.

  • Good design helps us to understand a product.

  • Good design is unobtrusive.

  • Good design is honest.

  • Good design is durable.

  • Good design is consequent to the last detail.

  • Good design is concerned with the environment.

  • Good design is as little design as possible.

Saturday, January 1, 2011

Happy New Year!

It is here: 2011. To begin this new year my wife and I will be moving to Seattle. We are excited and are looking forward to the change in seasons, again. It has also been over one year since I began web development full-time (I had done it part-time for many years prior). Things have gone very well for a first year, a blessing for sure. So many adventures.

Happy New Year!