Ajax

Google Chart API Tools

You have probably heard a lot about the Google Chart API. Well, a few tools and scripts are available that can be useful for creating charts with the API.

The first tool is a chart creator made by Dion Almaer (you may know Dion from Ajaxian). It is a nice little chart creation tool built with Ext 2.0 and the Google Chart API, and it is aptly named ChartMaker.

You can read all about the application at Dion's personal blog. You can also go to the application by clicking here, or click here to get the code.

Nice job, Dion; this application is a great use of Ext 2.0 and the Google Chart API.

The second tool that I found was posted over at Wait till I come! and is a script that takes data from an HTML table and converts it into a chart.

Below is an excerpt from the post.

Generating charts from accessible data tables and vice versa using the Google Charts API

Google have lately been praised for their chart API and my esteemed colleague Ed Eliot has a workaround for its restrictions in terms of caching and server hits.

I played around a bit with it and thought it very cool but it felt a bit clunky to add all these values to a URL when they could be in the document for those who cannot see pie charts. This is why I wrote a small script that converts data tables to charts using the API and a wee bit of JavaScript.

Using this script you can take a simple, valid and accessible data table like the following and it gets automatically converted to a pie chart.


<table class="tochart size300x100 color990000" summary="Browsers for this site, March 2007">
  <caption>Browsers</caption>
  <thead>
    <tr><th scope="col">Browser</th><th scope="col">Percent</th></tr>

  </thead>
  <tbody>
    <tr><td>Firefox</td><td>60</td></tr>
    <tr><td>MSIE</td><td>25</td></tr>

    <tr><td>Opera</td><td>10</td></tr>
    <tr><td>Safari</td><td>5</td></tr>

  </tbody>
</table>

Simply add the script to the end of the body and it’ll convert all tables with a class called “tochart”. You can define the size (widthxheight) and the colour as a hexadecimal triplet as shown in this example. If you leave size and colour out, the script will use presets you can alter as variables in the script itself.
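To make the idea concrete, here is a minimal sketch of how such a table-to-chart conversion can work. This is my own illustration, not the script from the post above: it finds tables with the "tochart" class, reads the size and colour presets from the class name, builds a Google Chart API URL, and swaps the table for a chart image (labels are assumed URL-safe for brevity).

// Minimal sketch of a table-to-chart conversion (illustrative only).
// Assumes each matching table has a <tbody> of label/value rows, as in
// the example above.
var tables = document.getElementsByTagName('table');
for (var i = tables.length - 1; i >= 0; i--) { // backwards: the list is live
  var t = tables[i];
  if (!/\btochart\b/.test(t.className)) { continue; }
  // presets from the class name, e.g. "size300x100" and "color990000"
  var size  = (t.className.match(/size(\d+x\d+)/) || [0, '300x100'])[1];
  var color = (t.className.match(/color([0-9a-fA-F]{6})/) || [0, '990000'])[1];
  var labels = [], values = [];
  var rows = t.getElementsByTagName('tbody')[0].getElementsByTagName('tr');
  for (var j = 0; j < rows.length; j++) {
    var cells = rows[j].getElementsByTagName('td');
    labels.push(cells[0].innerHTML);
    values.push(cells[1].innerHTML);
  }
  // cht=p is a pie chart; chs is size, chco colour, chd data, chl labels
  var img = document.createElement('img');
  img.src = 'http://chart.apis.google.com/chart?cht=p' +
            '&chs=' + size + '&chco=' + color +
            '&chd=t:' + values.join(',') + '&chl=' + labels.join('|');
  img.alt = t.getAttribute('summary') || '';
  t.parentNode.replaceChild(img, t);
}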

You can view a demo of the above by clicking here and you can download the demo code by clicking here. You can read the full post here.

It is great to see some good development in the Google Chart API arena.

If you know of any other cool applications that use the Google Chart API or similar APIs, we would love to hear about them. You can leave them in the comments or, if you sign up for a free account on this blog, you can blog about it on Ajaxonomy.com.

Cross-Site XMLHttpRequest in Firefox 3

Over at John Resig's blog (you may know John from his work on jQuery) he has an interesting post about using the XMLHttpRequest object to get cross-domain data without a cross-domain proxy in Firefox 3 (currently in beta). The built-in cross-site XMLHttpRequest feature is new to Firefox 3.

Below is an excerpt from John's post.

In a nutshell, there are two techniques that you can use to achieve your desired cross-site-request result: Specifying a special Access-Control header for your content or including an access-control processing instruction in your XML.
More information can be found in the documentation but here's a quick peek at what your code might look like:
An HTML document (served via PHP) that specifies an Access-Control header: (Demo - FF3 Only)

<?php header('Access-Control: allow <*>'); ?>
<b>John Resig</b>

An XML document that specifies an access-control processing instruction: (Demo - FF3 Only)

<?xml version="1.0" encoding="UTF-8"?>

<?access-control allow="*"?>
<simple><name>John Resig</name></simple>

Now what's especially nice about all this is that you don't have to change a single line of your client-side code to make this work! Take, for example, this page which requests an HTML file from a remote domain - and, specifically, the JavaScript within it:

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://dev.jquery.com/~john/xdomain/test.php", true);

xhr.onreadystatechange = function(){
  if ( xhr.readyState == 4 ) {

    if ( xhr.status == 200 ) {
      document.body.innerHTML = "My Name is: " + xhr.responseText;

    } else {
      document.body.innerHTML = "ERROR";

    }
  }
};
xhr.send(null);

You can read John's full post here.

As someone who loves the power of web services from different domains (one thing I love about JSON is that by using DOM manipulation to load the code as a script you can get around cross-domain restrictions without the overhead of a server-side proxy), I hope this feature catches on as new versions of other browsers are released, so that cross-site XMLHttpRequest eventually has cross-browser support.
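For those unfamiliar with that trick, here is a minimal sketch of the script-tag technique. The service URL and callback parameter below are hypothetical placeholders; the remote service must wrap its JSON in the named callback function.

// Minimal sketch of the script-tag (JSON-with-callback) technique
// mentioned above. The URL and callback parameter are hypothetical.
function loadJson(url, callbackName) {
  var script = document.createElement('script');
  script.src = url + '?callback=' + callbackName;
  document.getElementsByTagName('head')[0].appendChild(script);
}

// The browser executes the returned script, which calls this function:
function handleData(data) {
  alert('Name from the remote service: ' + data.name);
}

loadJson('http://example.com/service', 'handleData');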

del.icio.us Style Tag Suggestion with jQuery

Remy Sharp has recently posted an entry on his blog about del.icio.us style tag suggestion using jQuery. If you've used del.icio.us then you're probably familiar with their tag suggestion feature as you type in tags for a bookmark. Remy has encapsulated this piece of functionality in a jQuery library he's made available for download.

Check out the demo

The plugin has been successfully tested with:
IE 7, Firefox 2, Safari 3, and Opera 9.

Read the full post on Remy's blog to find out more about this plugin and to download the source code.

OpenAjax Hub 1.0 Approved

Jon Ferraiolo announced today that the OpenAjax Hub 1.0 Specification was approved by the Interoperability Working Group, the members, and the OpenAjax Alliance Steering Committee. This represents the first approved Specification for the OpenAjax Alliance under the terms of the OpenAjax Alliance Members Agreement.

OpenAjax Hub provides standard JavaScript that, when included with an Ajax-powered Web application, promotes the ability for multiple Ajax toolkits to work together on the same page. The central feature of the OpenAjax Hub is its publish/subscribe event manager (the “pub/sub manager”), which enables loose assembly and integration of Ajax components. With the pub/sub manager, one Ajax component can publish (i.e., broadcast) an event to which other Ajax components can subscribe, thereby allowing these components to communicate with each other through the Hub, which acts as an intermediary message bus. The umbrella use case for the OpenAjax Hub is the set of scenarios in which an Ajax developer needs to deploy a single application that uses multiple Ajax libraries simultaneously.
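As a quick illustration of the pub/sub pattern the Hub standardizes, here is a minimal sketch, assuming a page that has loaded the OpenAjax.hub.* JavaScript from the 1.0 spec:

// One Ajax component subscribes to a named event through the Hub...
OpenAjax.hub.subscribe('myapp.itemSelected', function(name, publisherData) {
  alert('Another toolkit selected item ' + publisherData.id);
});

// ...and a component from a completely different toolkit publishes it.
// The two components never reference each other directly; the Hub acts
// as the intermediary message bus.
OpenAjax.hub.publish('myapp.itemSelected', { id: 42 });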

> Read the final approved specification

The next phase is OpenAjax Hub 1.1, which will add secure mashup support and client-server communications. We expect an initial draft specification and open source code to appear in the next few weeks.

If you're unfamiliar with the OpenAjax Alliance:

The OpenAjax Alliance is an organization of 80+ vendors, open source initiatives, and Web developers dedicated to the successful adoption of open and interoperable Ajax-based Web technologies. The prime objective is to accelerate customer success with Ajax by promoting the ability to mix and match solutions from Ajax technology providers and by helping to drive the Ajax ecosystem.

Last year, 19 Ajax toolkits participated in the Second OpenAjax InteropFest for the OpenAjax Hub 1.0 spec and passed the test; the full list of toolkits appears in the announcement linked below.

> Read the full announcement on the OpenAjax Alliance Blog

Graceful handling of anchors with jQuery

Over at Hainhealt.com they have an interesting post on handling anchors gracefully with jQuery.

Below is an excerpt from the post:

I've come to use this quite often, which eventually leads to a considerable number of if statements.

Which is ugly. And since I don't like ugliness, I've coded myself a small anchor handler for jQuery. Looking at the code I think I could quite easily make it compatible with the Prototype framework too, but I'll keep that for another post :D

(function(){
  var url = window.location.href, handlers = [];

  jQuery.extend({
    anchorHandler: {
      add: function(regexp, callback) {
        if (regexp instanceof Array) {
          // an array of [regexp, callback] pairs was passed in
          jQuery.map(regexp, function(arg){
            handlers.push({r: arg[0], cb: arg[1]});
          });
        } else {
          handlers.push({r: regexp, cb: callback});
        }
        return jQuery.anchorHandler;
      }
    }
  // jQuery.extend returns jQuery itself, so this is $(document).ready(...)
  })(document).ready(function(){
    jQuery.map(handlers, function(handler){
      var match = url.match(handler.r) && url.match(handler.r)[0] || false;
      if (match) {
        handler.cb.apply(this, [match, (url.match(/#.*/) || [false])[0]]);
      }
    });
  });
})();

And I can add triggers like this:

$.anchorHandler
  .add(/\/\#ch\-cheatsheet/,    h.comment.showCheatsheet)

  .add(/\/\#comment\-compose/,  h.comment.showCompose)
  .add(/\/\#comment\-\d+/,      h.comment.focus);

The first argument is a regular expression (or a string) that is passed to the match function; the second argument is the callback function.

The method also accepts an array of [regexp, callback] pairs, like this:

$.anchorHandler.add([
  [/\/\#ch\-cheatsheet/,   h.comment.showCheatsheet],

  [/\/\#comment\-compose/, h.comment.showCompose],
  [/\/\#comment\-\d+/,     h.comment.focus]]);

The callback function receives two arguments: the matched part of the anchor, and the anchor itself.

Read the full post here.

This technique is very useful for making your applications degrade gracefully in browsers where JavaScript is disabled.

Ajax Pagination Script

The folks over at Dynamic Drive have put together a nice Ajax-based pagination script that lets you draw content from multiple pages and display it on demand. Pagination links are created automatically, with each page downloaded only when requested (speeding up delivery and saving bandwidth). An overview of the script:

  • The pagination interface for each Ajax Pagination instance is "free floating", meaning it can be positioned
    anywhere on the page and repeated multiple times as well.
  • Each page within the paginated content is fetched individually and only
    when requested for sake of efficiency.
  • The settings for each Ajax Pagination instance are stored neatly in a
    variable object for ease of portability. This variable can be defined
    manually or written out dynamically based on information returned from
    the server, such as via PHP/MySQL (see the sketch after this list).
  • The entire paginated content can be refreshed with new data on demand,
    with the pagination links updated automatically as well.
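The excerpt above does not show the script's real option names, so the following is only a hypothetical illustration of the settings-object pattern it describes, not Dynamic Drive's actual API:

// Hypothetical per-instance settings object (NOT the script's real option
// names). Each Ajax Pagination instance keeps its configuration in one
// variable like this.
var commentsPagination = {
  pages: ['comments_p1.htm', 'comments_p2.htm', 'comments_p3.htm'], // fetched on demand
  startPage: 0,                   // page to display on load
  targetId: 'commentsContainer'   // element that receives fetched content
};
// Because the settings live in a plain variable, a server-side script
// (e.g. PHP querying MySQL) could just as easily write this object out
// dynamically.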

This script is ideal for showing multi-page content such as "user comments" without reloading the rest of the page each time a comment page is requested.

Click here to visit the site and to download the script.

Dynamic Drive also has a Virtual Pagination Script that lets you "transform long content on your page into a series of virtual pages, browseable via pagination links. The broken up content pieces are separated simply via arbitrary DIVs (or another block level element of your choice) with a shared class name." Read the full post on the Virtual Pagination Script
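For illustration, here is a rough sketch of the virtual-pagination idea (my own simplification, not Dynamic Drive's actual code): every DIV sharing a class name becomes one page, and only one is shown at a time.

// Rough sketch of virtual pagination (illustrative only). All DIVs with
// the shared class "virtualpage" are collected as pages.
var pages = [];
var divs = document.getElementsByTagName('div');
for (var i = 0; i < divs.length; i++) {
  if (/\bvirtualpage\b/.test(divs[i].className)) { pages.push(divs[i]); }
}

// Show one page and hide the rest; pagination links would call this
// with the index of the page they represent.
function showVirtualPage(index) {
  for (var i = 0; i < pages.length; i++) {
    pages[i].style.display = (i == index) ? 'block' : 'none';
  }
}

showVirtualPage(0); // start on the first virtual page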

Let us know your experience with the Ajax Pagination Script or with any other pagination script you've worked with.

Get Insight into Digg's Bury System with Ajaxonomy's Bury Recorder

If you have been using the popular service Digg, you know that it is very easy to submit a story, see it start to gain traction, and then watch it get buried into the dark abyss. What I find particularly frustrating is that you don't know how many people buried the story or their reasons for the bury. If you have seen Digg Spy you have noticed that it does show buries, but you can't track bury data for just one particular story.

After much frustration, Ajaxonomy is now releasing a Bury Recorder application. Here is how it works: you take the story's URL (the URL of the page that the "more" link on the Digg upcoming/popular pages takes you to, or the page that clicking on the story title from your profile takes you to, i.e. http://digg.com/[story]), put it into the application, and once you click "Watch for Buries" the application will start recording any buries that the story receives. This allows you to see whether your story had 100 diggs and 5 buries before it was permanently buried, or whether it was more like 100 diggs and 300 buries. The idea is that you submit a story and then have the recorder capture any buries from the moment you start watching. Note that in this Beta 1.0 release you have to leave your machine on and the application open to make sure it continues to capture buries.

Before I go into the design and more details on using the application, I wanted to say that the application is open source and can be modified and put on your own server. If you do change it and/or put it on a different server, we just ask for a link back to us crediting us with the initial creation of the application. Also, if you do decide to put it on a server, let us know and we might link to your server as another option, to alleviate traffic concerns on our server.

So, now that you are excited, you will want the link to the application. Click here to be taken to the Bury Recorder application.

The following is a quick overview of how to use the application, to make it a bit less confusing (more than likely most people could figure it out, but this way, if it looks like it isn't working, you have somewhere to look).

Using the application is as easy as one, two, three. However, there are two ways to use it; below is the first.

  1. Open the application (once again the link to the application is here)
  2. Copy and paste the URL of the story into the text box (i.e. http://digg.com/[story])
  3. Click the "Watch for Buries" button and then let the application start recording buries (make sure not to close the application or to turn off/hibernate your computer)

The other way to use the application is as easy as one, two (yep, there is no three with this method). Before using the steps below, you will need to create a bookmarklet, which can be done by following the directions at the bottom of the application.

  1. Click on the bookmarklet from the story page on Digg (this has to be the page that you get when you click on the "more" link in Digg [or, from your profile page, the page that clicking on the title of the story takes you to], which is the same page you would use to get the URL for the first method)
  2. Click the "Watch for Buries" button and then let the application start recording buries (make sure not to close the application or to turn off/hibernate your computer)

Now that you know how to use the application, I will go a bit into how it was created. The application reads the JSON feed used by Digg Spy. It does this using Ajax (i.e. the XMLHttpRequest object), which requires a server-side proxy due to cross-domain security restrictions. Because of the way the JSON is returned from Digg Spy (it doesn't assign the returned object to a variable), we are forced to use the aforementioned server-side proxy and an eval statement instead of loading the data through DOM manipulation. The application simply polls for updated data every 20 seconds, which ensures we don't miss any data while not putting too much strain on the server.
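Here is a simplified sketch of that polling approach (not the application's actual source; proxy.php and the event filtering are hypothetical placeholders):

// Simplified sketch of the polling described above. proxy.php stands in
// for a server-side proxy that fetches the Digg Spy JSON feed and echoes
// it back; since the feed is bare JSON with no callback wrapper, eval()
// is used to turn the response text into an object.
var watchedStoryUrl = 'http://digg.com/[story]';

function pollForBuries(storyUrl) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'proxy.php?url=' + encodeURIComponent(storyUrl), true);
  xhr.onreadystatechange = function() {
    if (xhr.readyState == 4 && xhr.status == 200) {
      var events = eval('(' + xhr.responseText + ')');
      // ...filter events for buries of the watched story and record them...
    }
  };
  xhr.send(null);
}

// Poll every 20 seconds: frequent enough not to miss events, infrequent
// enough not to strain the server.
setInterval(function(){ pollForBuries(watchedStoryUrl); }, 20000);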

You can download the full source code for this Beta 1.0 release here.

This release has been tested in Firefox 2.0.0.11 and Internet Explorer 7. It should work in many more browsers, but has not yet been tested in them. If you try it in a different browser and either find bugs or find that it works perfectly, we would appreciate it if you contacted us with your testing results.

Also, if you do any cool things with the application, or if you have any cool ideas, feel free to blog about it on this blog. Just sign up for a free account, and once you log in, click "Create Content" => "Blog Entry" and write your post. If the admins of this site feel that your post is an interesting one, they will move it to the home page.

I hope that you find this application useful and that you keep checking back for new versions and improvements.

Spellify 1.0 - An Automatic Text Field Spell Checker

Spellify is a script.aculo.us/Prototype-based spell checker for form fields that uses Google as its spell-check engine. Last month, Spellify released version 1.0, officially taking the application out of beta and into prime time.

Homepage: spellify.com
Spell Check for Multiple Lines: spellify.com/extended.html

Requirements
- PHP 4+ with the cURL library installed (developed using PHP 4.4.6)
- script.aculo.us 1.8.0 (latest prototype.js and effects.js required)

Browser Compatibility
IE7, FF2, IE6, Opera 9, and Safari 3. It may work in other browsers as well.

Visit the download page

Ajax Testing Tools

Over at the Ajax Blog (ajaxwith.com) they have put together a good list of Ajax testing tools.

Below is an excerpt from the post with a list of tools.

Here are the three websites you can use for testing your Ajax-based program:

1. Squish (froglogic.com) – Specifically called Squish for Web, this program effectively recognizes properly coded HTML. More important for an Ajax-based program, though, is its ability to handle the DOM, which matters because Ajax applications usually deal with HTTP-style commands and requests. Squish can drive the popular web browsers, and not only IE and Mozilla: Firefox, Safari, and KDE's Konqueror on different operating systems can all be handled. Since it is multiplatform, Squish can test mixed and matched software across different browsers and IDEs. There are three downsides to this product though: first, it is highly technical; second, the price, which can easily reach US$2,500; and third, Ajax stress testing is not supported. That last point matters now that millions are geared up to use Ajax, but hopefully the next update of Squish will add it.
2. WAPT (loadtestingtool.com) – If you lack a stress-testing tool because you purchased Squish, this is the additional tool for you. WAPT is dedicated to stress testing Ajax-based websites. As of this writing, the latest version is 5.0, which is really effective for websites. Although this tool is not yet optimized for Vista, it can be very effective once it has been set up. You'll be able to see graphs of how well your website handles multiple visitors, and the tool will tell you how many visitors your website can service seamlessly. You don't even have to limit yourself to HTTP sites; WAPT 5.0 is also optimized for secure (HTTPS) websites.
3. Charles Web Debugging Tool (xk72.com/charles) – Available for IE, Mozilla, and Safari, Charles lets you test more than just simple Ajax applications. The tool actually simulates your website so you can see how the Ajax will run when applied to a secured site. After a site is run through Charles, the results are shown in a tree format; if you want a clear, easily understandable presentation of your website as it is debugged, this is your tool. Ajax stress can also be tested by changing the configuration of the simulation, so you don't need to run a beta version to test whether it can handle heavy stress.

Click here to read the full post.

If you use any of these tools, leave a note in the comments; I would love to hear about your experience.

Buried in 2 and a Half Hours

This is a follow-up to my post If You Bury a Digg Story, at Least Comment. The story was buried in an amazing 2 and a half hours (even though there were over 30 diggs)! What amazed me most is that it was buried so fast despite so many diggs (it had about 50 diggs in just 3 hours).

After this happened, I contacted Digg and asked if there was any way to appeal such a bury; below is the response that I received.

That story was reported as lame and subsequently removed by the Digg community. Buried stories do not get re-instated as that undermines the decisions of the Digg community. Please read our FAQ (digg.com/faq) for more information on buried stories.

I understand Digg's position; however, it would be nice to at least know how many buries a story received, and for the story to still have a way to be made popular. I would also like to know whether a bury is given more weight than a digg, which would make it easy to get stories removed, especially when you can give a reason like lameness. I would love to see some kind of appeal process that gives buried stories a second chance instead of just writing them off.

So, if you digg this story, remember you only have 2 and a half hours to digg it up before it gets buried. It looks like the Bury Brigade is quite powerful indeed.

If you agree with the above, please digg this story and pass it on. Let us try to get it past the Bury Brigade.
