John Resig Announces Plans for Second JavaScript Book


On his blog yesterday, John Resig - creator of jQuery - announced plans to begin work on a new JavaScript book. His previous book, Pro JavaScript Techniques, was a success and is available on Amazon.

The content of the book has not yet been solidified, but John has posted a question to the readers of his blog requesting some assistance. He's asking: "What are the greatest untold secrets of JavaScript programming that you wish were thoroughly debunked and explained?"

John already has some of his own answers to this question, and they are:

  • What is (function(){ })() and why is it so fundamentally important to modern JavaScript development?
  • What does with(){...} do and why is it so useful?
  • How can arguments.callee change how I work with JavaScript code?
  • How exactly do timers work and how can I best use them?
  • How do I identify and tackle memory leaks in web applications?
  • How do I write a cross browser way of...
    • Getting/setting attributes.
    • Injecting HTML strings.
    • Getting/setting computed css values.
    • Managing DOM events.
    • Writing a CSS selector engine.
    • Doing smooth animations.
  • How can I use verification tools (like JSLint) to my advantage - and write my own?
  • What's the best way to transmit JavaScript files?
  • How do I write my own JavaScript compressor (like Packer)?
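As a taste of the first bullet, here is a minimal sketch of the (function(){ })() pattern: an anonymous function defined and invoked immediately, so everything declared inside stays out of the global scope. The counter example itself is ours, not from the book:

```javascript
// The (function(){ })() pattern: define an anonymous function and call it
// immediately. Everything declared inside stays private to that scope.
var counter = (function () {
  var count = 0; // invisible outside the wrapper -- no global leakage
  return function () {
    count += 1;
    return count;
  };
})();
```

Each call to counter() increments the hidden count, which no other code can touch; that kind of privacy is why the pattern underpins so many modern JavaScript libraries.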

If you have any suggestions for questions to be answered in John's new book or would like to give him a word of encouragement, you can leave him some feedback on his related blog post.

The Weight of a Bury


You may have seen my last post about the launch of the Bury Recorder (click here to read the original post). To fully test the application, I ran the Digg story of that post through the Bury Recorder and got some interesting results.

The story was fully buried (meaning it no longer shows in the upcoming, popular, or hot sections of Digg) at 68 diggs. What was incredible was that it took only 4 buries! Also remarkable: as of this writing the story has 107 diggs, 22 hours after being submitted (which is impressive in itself, given that it was buried back at 68 diggs). Another interesting detail is the timing: the first two buries came between 9:00pm and 11:00pm, while the last two came between 5:00am and 7:00am. I'm not sure if this means anything, but the spacing seems interesting. It amazes me that a single bury would be given so much weight relative to a digg. The image below shows the actual data from the application.

I'm not sure if this is a fluke and whether it would normally take a larger ratio to fully bury a story (I know the algorithm changes depending on who dugg the story, the time of day, etc.), so I'll be running the recorder against other Digg stories and will write about my findings in the near future.

The nice thing about having the application is that we can now surface information like this, which should at least take some of the mystery out of having a story buried on Digg.

While there's no current way to accurately determine the weight of a bury by using this recorder we may be able to gain future insight on the patterns of buried stories (number of buries required to fully bury the story, bury frequency, bury reasons, etc...). This could possibly help combat the rumored Bury Brigade.

You can go directly to the Bury Recorder application by clicking here.

Get Insight into Digg's Bury System with Ajaxonomy's Bury Recorder


If you have been using the popular service Digg, you know that it is very easy to submit a story, watch it start to gain traction, and then see it buried into the dark abyss. What I find particularly frustrating is that you don't know how many people buried the story or their reasons for burying it. If you have seen Digg Spy, you have noticed that it does show buries, but it can't track data for one particular story.

After much frustration, Ajaxonomy is now releasing a Bury Recorder application. Here is how it works: you take the story's URL (the URL that the "more" link on the Digg upcoming/popular pages takes you to, or the page that clicking the story title takes you to from your profile, i.e. [story]), paste it into the application, and click "Watch for Buries"; the application then starts recording any buries the story receives. This lets you see whether your story had 100 diggs and 5 buries before it was permanently buried, or whether it was more like 100 diggs and 300 buries. The idea is that you submit a story and then have the recorder capture every bury from the moment you start watching. Note that in this Beta 1.0 release you have to leave your machine on and the application open to make sure it continues to capture buries.

Before going into the design and usage details, I want to mention that the application is open source and can be modified and hosted on your own server. If you do change it and/or host it elsewhere, we just ask for a link back to us and credit for the initial creation of the application. Also, if you do put it on a server, let us know and we might link to yours as another option to alleviate traffic concerns on our server.

So, now that you are excited, you will want the link to the application. Click here to be taken to the Bury Recorder application.

The following is a quick overview of how to use the application, to make it a bit less confusing (most people could probably figure it out, but this way you have somewhere to look if it seems like it isn't working).

Using the application is as easy as one, two, three. There are, however, two ways to use it; the first is as follows.

  1. Open the application (once again the link to the application is here)
  2. Copy and paste the URL of the story into the text box (i.e.[story])
  3. Click the "Watch for Buries" button and then let the application start recording buries (make sure not to close the application or turn off/hibernate your computer)

The other way to use the application is as easy as one, two (yep, there is no three with this method). Before using the steps below, you will need to create a bookmarklet, which you can do by following the directions at the bottom of the application.

  1. Click the bookmarklet from the story page on Digg (this has to be the page you get when you click the "more" link on Digg [or, from your profile page, the page that clicking the story title takes you to], which is the same page you would use to get the URL for the first method)
  2. Click the "Watch for Buries" button and then let the application start recording buries (make sure not to close the application or turn off/hibernate your computer)
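For the curious, a bookmarklet along these lines is just a javascript: URL saved as a bookmark. The sketch below shows the general shape; the recorder address and query-parameter name are made-up placeholders, not the application's real URL:

```javascript
// Hypothetical recorder address -- a placeholder, not the real application URL.
var RECORDER_URL = 'http://example.com/bury-recorder/';

// Build the link a bookmarklet would navigate to: the recorder page with the
// current story's URL passed along as a query parameter.
function recorderLink(storyUrl) {
  return RECORDER_URL + '?story=' + encodeURIComponent(storyUrl);
}

// Saved as a bookmark, the whole bookmarklet is a one-liner of the same shape:
// javascript:location.href='http://example.com/bury-recorder/?story='+encodeURIComponent(location.href)
```

Clicking the bookmark while on the story page hands the page's own URL to the recorder, which is why there is no copy-and-paste step in this method.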

Now that you know how to use the application, I will go a bit into how it was created. The application reads the JSON feed used by Digg Spy. It fetches the feed with Ajax (i.e. the XMLHttpRequest object), which requires a server-side proxy due to cross-domain security restrictions. Because of the way the JSON is returned from Digg Spy - it doesn't assign the returned object to a variable - we were forced to use the aforementioned server-side proxy and an eval statement instead of DOM manipulation (script-tag insertion). The application simply polls for updated data every 20 seconds, which ensures we don't miss any data without putting too much strain on the server.
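The polling loop described above can be sketched roughly as follows. The proxy path and the shape of the feed object are assumptions for illustration; only the 20-second interval and the eval-based parsing come from the write-up:

```javascript
var POLL_INTERVAL_MS = 20000; // poll every 20 seconds, as described above

// Digg Spy's feed comes back as a bare object literal (no "var x = ..."),
// so after the proxy fetches it we wrap it in parentheses and eval it.
function parseSpyFeed(text) {
  return eval('(' + text + ')');
}

// Watch one story via a same-origin proxy (the path here is hypothetical).
function startWatching(storyId, onBuries) {
  setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/proxy/diggspy?story=' + storyId, true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        var data = parseSpyFeed(xhr.responseText);
        // Assumed feed shape: report the running bury count to the caller.
        if (data.buries !== undefined) { onBuries(data.buries); }
      }
    };
    xhr.send(null);
  }, POLL_INTERVAL_MS);
}
```

Today a JSON.parse call or a JSONP script tag would be the usual choices, but since the feed isn't assigned to a variable and JSON.parse didn't ship natively in 2007-era browsers, the proxy-plus-eval route was the pragmatic one.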

You can download the full source code for this Beta 1.0 release here.

This release has been tested in Firefox and Internet Explorer 7. It should work in many more browsers, but has not yet been tested in them. If you try it in a different browser and either find bugs or find that it works perfectly, we would appreciate it if you contacted us with your testing results.

Also, if you do any cool things with the application, or if you have any cool ideas, feel free to blog about it on this blog. Just sign up for a free account, and once you log in, click "Create Content" => "Blog Entry" and write your post. If the admins of this site find your post interesting, they will move it to the home page.

I hope that you find this application useful and that you keep checking back for new versions and improvements.

Java Tip of the Day: Using Interfaces


Anyone who has ever programmed in Java is intimately familiar with using the interface, Java's purely abstract class and cleaner answer to C++'s concept of multiple inheritance. It's an old idea by now, and yet new ideas can somehow breathe life into old ones. And this is what has happened recently with interfaces.

The obvious use of interfaces is in abstracting behavior (i.e., methods) of objects and allowing implementing classes to implement this behavior. For example, take an interface for returning an SQL connection from different sources:

public interface ConnectionFactory { 
     public Connection getConnection(); 
}

public class JndiConnectionFactory implements ConnectionFactory { 
     public Connection getConnection() { 
          // get DataSource from JNDI, return Connection 
     }
}

public class SingleConnectionFactory implements ConnectionFactory {
     public Connection getConnection() { 
          // create Connection using properties and return
     }
}
Here the ConnectionFactory interface is given two different implementations: one for container environments (JNDI) and the other for non-container (stand-alone) environments (plain JDBC).

OK, this is old news. But there is another, more subtle, use of interfaces if you think about them as a replacement for multiple inheritance or as being somewhat akin to "aspects" in AOP: that interfaces are a way of layering behavior onto an object. Thought of this way, the style of defining monolithic interfaces with many methods becomes less desirable than defining many interfaces that define one or a few methods that describe a single aspect of an object's behavior.

Let me give two examples of this idea, one from the core language itself and the other my own.

First, the core language. When Java 5 was introduced, a few simple one-method common interfaces came with it. One of these was java.lang.Iterable, which defines the single method public Iterator<T> iterator(). Combined with the introduction of generics, this allowed a new "foreach" construct that looks like:

for (T obj : iterable) {
     // ...logic here...
}
By isolating the simple behavior of returning an iterator, Java 5 allows any class that implements Iterable - not just the Collections classes - to be used in the new "foreach" loop. Joshua Bloch has also proposed adding "resource blocks" to Java 7, in which any object that implements a Closeable interface would be automatically closed at the end of the block.

The second example has to do with defining public and private APIs. What if, in the example above, I were using the ConnectionFactory in designing a framework that would be used as a library in other Java applications, and I wanted to define an internally used Connection that I did not want exposed to the client of the library? And that I also want to use the JndiConnectionFactory and SingleConnectionFactory implementations above with this internally-used Connection?

One approach might be to create two different ConnectionFactory objects, exposing one to the client and keeping the other hidden internally to the framework. This is a perfectly reasonable approach, but let's say (for purposes of illustration) that this is cumbersome because of the framework design, and you would rather have a single, centralized ConnectionFactory that defines another method, public Connection getInternalConnection(). This presents a problem, because you really don't want to expose this private API method to the client (one of Java's limitations is that there aren't many options for information hiding between classes: just package-private and protected).

A clever solution to this issue might be to define two separate interfaces, one that exposes a public API and the other that exposes a private API. This way you can have the various implementing classes implement both the public and private API, and you only hand out a reference to the public API to the client.

public interface InternalConnectionFactory {
     public Connection getInternalConnection();
}

public class JndiConnectionFactory 
     implements ConnectionFactory, InternalConnectionFactory { ... }

public class ConnectionFactoryWrapper implements ConnectionFactory {
     private ConnectionFactory wrappedFactory;

     public ConnectionFactoryWrapper(ConnectionFactory wrappedFactory) {
          this.wrappedFactory = wrappedFactory;
     }

     // delegate methods here
}

// This class defines the way the client gets a connection factory...
public class AppContext {
     public ConnectionFactory getConnectionFactory() { 
          return new ConnectionFactoryWrapper(connFactoryImpl);
     }
}
The wrapper class here is used to protect the internal methods from being accessed by the client (could also use a dynamic proxy here). This promotes a cleaner, simpler public API without "polluting" it with methods meant to be only used internally by the framework.

OSGi and the upcoming Java Module System handle this need in a much more robust way, but OSGi may not be practical for many projects, and JSR 277 may be very far away for organizations that are just now migrating to Java 5. Clever use of interfaces addresses the need in a simple and practical way.

Spellify 1.0 - An Automatic Text Field Spell Checker

Spellify is a spell checker for form fields that uses Google as its spell-check engine. Last month, Spellify released version 1.0, officially taking the application out of beta and into prime time.

Check out the short video below for a demo:

Spell Check for Multiple Lines:

The Forms Demo:

Requirements:
- PHP 4+ with the cURL library installed (developed using PHP 4.4.6)
- Version 1.8.0 (latest prototype.js and effects.js required)

Browser Compatibility
IE7, FF2, IE6, Opera 9, Safari 3. May work in other browsers as well.

Visit the download page

Ajax Testing Tools


Over at the Ajax Blog, they have put together a good list of Ajax testing tools.

Below is an excerpt from the post with the list of tools.

Here are the three websites you can use for testing your Ajax based program:

1. Squish – Specifically, Squish for Web. This program effectively recognizes HTML when it is properly coded, but what matters more for an Ajax-based program is its ability to handle the DOM, since Ajax applications deal heavily in HTTP-style commands and requests. Squish can drive the popular web browsers - and not only IE and Mozilla: Firefox, Safari, and KDE's Konqueror on different operating systems can all be handled. Because it is multiplatform, Squish can test mixed and matched software across different browsers and IDEs. There are three downsides to this product, though: first, it is highly technical, as the output is all numbers; second, the price, which can easily reach US$2,500; and third, Ajax stress testing is not supported. The last point is important, since so many sites are now moving to Ajax; hopefully the next update of Squish will add it.
2. WAPT – If you lack a stress-testing tool because you purchased Squish, this is the additional tool for you. WAPT is dedicated to stress testing Ajax-based websites. As of this writing, the latest version is 5.0, which is really effective for websites. Although the tool is not yet optimized for Vista, it can be very effective once set up. You'll be able to see graphs of how fast your website handles multiple visitors, and the tool will tell you how many visitors your website can service seamlessly. You aren't limited to http sites, either: WAPT 5.0 is also optimized for secure (https) websites.
3. Charles Web Debugging Tool – Available for IE, Mozilla, and Safari, Charles lets you test more than just simple Ajax applications. The tool actually simulates the website so you can see how the Ajax will run when applied to a secured site. After a site is run through Charles, the results are shown in a tree format. If you want a good, easily understandable presentation of your website after it's debugged, this is your tool. Ajax stress can be tested by changing the configuration of the simulation, so you don't need to run a beta version to find out whether it can handle heavy load.

Click here to read the full post.

If you use any of the tools leave them in the comments, I would love to hear about your experience.

Buried in 2 and a Half Hours


This is a follow-up to my post If You Bury a Digg Story at Least Comment. The story was buried in an amazing two and a half hours, even though there were over 30 diggs by then! It amazed me that it was buried so fast despite having so many diggs (about 50 diggs in just 3 hours).

After this happened I contacted Digg and asked if there was any way to appeal such a bury and below is the response that I received.

That story was reported as lame and subsequently removed by the Digg community. Buried stories do not get reinstated, as that undermines the decisions of the Digg community. Please read our FAQ for more information on buried stories.

I understand Digg's position; however, it would be nice to know at least how many buries a story receives, and to have some way for the story to still be made popular. I would like to know whether a bury is given more weight than a digg, which would make it easy to have stories removed, especially when a reason like lameness can be attached. I would love to see some kind of appeal process that gives buried stories a second chance instead of just writing them off.

So, if you digg this story, remember: you may only have two and a half hours to digg it up before it is buried. It looks like the Bury Brigade is quite powerful indeed.

If you agree with the above, please digg this story and pass it on. Let's try to get it past the Bury Brigade.

Ruby 1.9.0 Just Released


I just found out that Ruby 1.9.0 (a development release) was released today. I haven't had a chance to look into it much yet, but below is where you can download the new release; you can also view the change log here.

You can fetch it from:




I'll write more about the new release of Ruby as soon as I get a good chance to look at the new features, but until then check out the new release using the above links.

If you happen to know anything about the new release, feel free to log in to your Ajaxonomy account and leave a comment, or click "Create Content" => "Blog Entry" and write a post about it on this blog. If the post is an interesting one, it will be moved to the home page.

If You Bury a Digg Story at Least Comment


I have been posting to Digg for a while now, but recently my submissions seem to be continually buried. Is this the rumored Bury Brigade? I have no way of knowing why these stories were buried; they are not spam posts, and they are interesting original content. It seems that whenever I post a story and it gets into the "Hot" section, it gets buried soon after it starts to rise.

Since Digg gives me no way of knowing who buried my stories or why, it gives me no chance to learn what I should do differently, and no way to appeal the bury. All I want from people who bury my stories is the courtesy of a comment letting me know why. Kevin Rose, some visibility here would be very helpful.

So please if you bury my stories just leave a comment, so I know what you think I did wrong.

Internet Explorer 8 Passes Acid2 Test


Well, it's time for the devil to grab a warm jacket. According to the IEBlog, IE 8 passes the Acid2 test, an important and rigorous measure of web-standards compliance. Both IE 7 and Firefox 2.0.x fail the Acid2 test.

This is what the Acid2 test looks like on Firefox

It looks much worse in IE 7. Don't let the apparent simplicity of the Acid2 test fool you: it is a very complex page that makes extensive use of CSS positioning, CSS tables, overlapping content, and even invalid CSS that should be ignored. See the explanation for details.

The latest salvo in the new browser wars?

Update: Firefox 3 passed the Acid2 test as of the 12/08/2006 build (trunk).
