Debugging Data Issues in Ajax Applications


One of the things that changes when we develop Ajax applications is the visibility of the data being received by the application. To see what is happening on the back end, you will probably want a good traffic sniffer, which will give you some visibility into what the XMLHttpRequest object is doing.
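As a rough sketch of the idea behind these tools: wrap a method so every call is logged before being forwarded to the original. The `traceMethod` helper and the stand-in `client` object below are purely illustrative (not part of any tool mentioned here); in a browser you would wrap `XMLHttpRequest.prototype.open` the same way.

```javascript
// Wrap a named method on an object so each call is logged, then forwarded.
// traceMethod and the demo "client" object are hypothetical examples.
function traceMethod(obj, name, log) {
  var original = obj[name];
  obj[name] = function () {
    var args = Array.prototype.slice.call(arguments);
    log.push(name + "(" + args.join(", ") + ")");
    return original.apply(this, args); // forward to the real method
  };
}

// Demo on a stand-in for XMLHttpRequest:
var log = [];
var client = {
  open: function (method, url) { this.url = url; return true; }
};
traceMethod(client, "open", log);
client.open("GET", "/data.xml");
```

This is the same interception pattern the tracing extensions rely on: the caller's code is untouched, but every request shows up in the log.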

If you already use Greasemonkey you will probably want to check out the XMLHttpRequest Tracing and XMLHttpRequest Debugging extensions. The XMLHttpRequest Tracing extension unobtrusively logs traffic to the JavaScript console, while the XMLHttpRequest Debugging extension is a powerful interactive tool that not only shows the messages, but also lets you set filters and configure the display. As with Greasemonkey, these extensions are open source.

Fiddler is a Windows proxy specifically designed for analyzing browser service traffic. Fiddler is free from Eric Lawrence and Microsoft.

Another great tool is Live HTTP Headers, a Firefox extension that reveals information about the HTTP headers. The extension adds features to existing Firefox menus that let you get more information about the HTTP headers. For example, it adds a "Headers" tab to the "View Page Info" dialog of a web page.

While these are not the only tools for the job, they are a few good ones. If you know of any other good traffic sniffing tools, please leave them in the comments so that the community can know about them. Now go out there and find any issues with the data coming into your application through the XMLHttpRequest object.



This blog was created by professional web developers in an attempt to build a community for sharing and discussing topics related to AJAX and other interesting web technologies. As a registered user of Ajaxonomy.com, you can create your own personal blog. Registration is free - just sign up on the right side of the page (or log in with your OpenID). As a registered user you will be able to write and edit posts and have your own RSS feed. If that wasn't enough, your posts even have an opportunity to be promoted to the homepage and main RSS feed of the site.

Besides our main blog and user blogs, this site includes a section titled "global", which stands for Global Ajaxonomy - your source for news, aggregated from other sites we recognize as valuable sources for Ajax and Web 2.0 information. Your suggestions are welcome for more sites to add to Global Ajaxonomy.

We also look forward to your feedback and hope you would contact us with requests for any specific topics not already on the site.

-The Ajaxonomy Team

Cross-Domain XML Access - a.k.a. Server Side Proxy


In developing Ajax applications there are quite a few times when you may need to consume some type of XML from a different domain (especially in the case of mashups). Since the browser's security restrictions prevent retrieving XML from a different domain, you have to find another way of getting to the XML. One way around this is to use JSON data; however, many APIs are only available as XML, so it is very possible that XML will be your only option.

In order to get around these security restrictions we need to use a server side proxy. The server retrieves the XML from the other domain (server-to-server requests are not stopped by the browser's security restrictions) and then returns it to the Ajax application, which reads it through the XMLHttpRequest object. As far as the browser is concerned, the XML comes from the same domain.

So without further delay, below is some PHP code that I have used to create a server side proxy. You just need to pass in the URL of the XML through the proxy_url parameter in the query string.

//          FILE: proxy.php
// LAST MODIFIED: 2006-03-23
//        AUTHOR: Troy Wolf 
//   DESCRIPTION: Allow scripts to request content they otherwise may not be
//                able to. For example, AJAX (XmlHttpRequest) requests from a
//                client script are only allowed to make requests to the same
//                host that the script is served from. This is to prevent
//                "cross-domain" scripting. With proxy.php, the javascript
//                client can pass the requested URL in and get back the
//                response from the external server.
//         USAGE: "proxy_url" required parameter. For example:
//                http://www.mydomain.com/proxy.php?proxy_url=http://www.yahoo.com

// proxy.php requires Troy's class_http. http://www.troywolf.com/articles
// Alter the path according to your environment.
require_once('class_http.php');

$proxy_url = isset($_GET['proxy_url'])?$_GET['proxy_url']:false;
if (!$proxy_url) {
    header("HTTP/1.0 400 Bad Request");
    echo "proxy.php failed because proxy_url parameter is missing";
    exit();
}

// Instantiate the http object used to make the web requests.
// More info about this object at www.troywolf.com/articles
if (!$h = new http()) {
    header("HTTP/1.0 501 Script Error");
    echo "proxy.php failed trying to initialize the http object";
    exit();
}

$h->url = $proxy_url;
$h->postvars = $_POST;
if (!$h->fetch($h->url)) {
    header("HTTP/1.0 501 Script Error");
    echo "proxy.php had an error attempting to query the url";
    exit();
}

// Forward the headers to the client.
$ary_headers = split("\n", $h->header);
foreach($ary_headers as $hdr) { header($hdr); }

// Send the response body to the client.
echo $h->body;
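On the client side, the proxy is called like any same-domain resource. A minimal sketch (the `buildProxyUrl` helper is illustrative, not part of proxy.php): note that the target URL should be URL-encoded, so any query string it carries survives the trip through proxy.php's own query string.

```javascript
// Build the URL for the server-side proxy. Encoding the target URL keeps
// any query string it carries from being mixed into proxy.php's own.
function buildProxyUrl(proxyBase, targetUrl) {
  return proxyBase + "?proxy_url=" + encodeURIComponent(targetUrl);
}

var url = buildProxyUrl("/proxy.php", "http://www.yahoo.com/feed?type=rss");
// In the browser you would then request it as usual:
//   var xhr = new XMLHttpRequest();
//   xhr.open("GET", url, true);
//   xhr.onreadystatechange = function () {
//     if (xhr.readyState === 4) { var doc = xhr.responseXML; /* parse it */ }
//   };
//   xhr.send(null);
```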

Below is the class_http class that is referenced in the code above.

/*
* Filename.......: class_http.php
* Author.........: Troy Wolf [troy@troywolf.com]
* Last Modified..: Date: 2006/03/06 10:15:00
* Description....: Screen-scraping class with caching. Includes image_cache.php
*                  companion script. Includes static methods to extract data
*                  out of HTML tables into arrays or XML. Now supports sending
*                  XML requests and custom verbs with support for making
*                  WebDAV requests to Microsoft Exchange Server.
*/
class http {
    var $log;
    var $dir;
    var $name;
    var $filename;
    var $url;
    var $port;
    var $verb;
    var $status;
    var $header;
    var $body;
    var $ttl;
    var $headers;
    var $postvars;
    var $xmlrequest;
    var $connect_timeout;
    var $data_ts;
    /* The class constructor. Configure defaults. */
    function http() {
        $this->log = "New http() object instantiated.<br />\n";

        /* Seconds to attempt socket connection before giving up. */
        $this->connect_timeout = 30;

        /* Set the 'dir' property to the directory where you want to store the
           cached content. I suggest a folder that is not web-accessible.
           End this value with a "/". */
        $this->dir = realpath("./")."/"; //Default to current dir.

        /* Initialize request state (headers, postvars). */
        $this->clean();
        return true;
    }

    /* fetch() method to get the content. fetch() will use 'ttl' property to
       determine whether to get the content from the url or the cache. */
    function fetch($url="", $ttl=0, $name="", $user="", $pwd="", $verb="GET") {
        $this->log .= "--------------------------------<br />fetch() called<br />\n";
        $this->log .= "url: ".$url."<br />\n";
        $this->status = "";
        $this->header = "";
        $this->body = "";
        if (!$url) {
            $this->log .= "OOPS: You need to pass a URL!<br />";
            return false;
        }
        $this->url = $url;
        $this->ttl = $ttl;
        $this->name = $name;
        $need_to_save = false;
        if ($this->ttl == "0") {
            if (!$fh = $this->getFromUrl($url, $user, $pwd, $verb)) { return false; }
        } else {
            if (strlen(trim($this->name)) == 0) { $this->name = MD5($url); }
            $this->filename = $this->dir."http_".$this->name;
            $this->log .= "Filename: ".$this->filename."<br />";
            $this->getFile_ts();
            if ($this->ttl == "daily") {
                if (date('Y-m-d',$this->data_ts) != date('Y-m-d',time())) {
                    $this->log .= "cache has expired<br />";
                    if (!$fh = $this->getFromUrl($url, $user, $pwd, $verb)) { return false; }
                    $need_to_save = true;
                } else {
                    if (!$fh = $this->getFromCache()) { return false; }
                }
            } else {
                if ((time() - $this->data_ts) >= $this->ttl) {
                    $this->log .= "cache has expired<br />";
                    if (!$fh = $this->getFromUrl($url, $user, $pwd, $verb)) { return false; }
                    $need_to_save = true;
                } else {
                    if (!$fh = $this->getFromCache()) { return false; }
                }
            }
        }

        /* Get response header. */
        $this->header = fgets($fh, 1024);
        $this->status = substr($this->header,9,3);
        while ((trim($line = fgets($fh, 1024)) != "") && (!feof($fh))) {
            $this->header .= $line;
            if ($this->status=="401" and strpos($line,"WWW-Authenticate: Basic realm=\"")===0) {
                fclose($fh);
                $this->log .= "Could not authenticate<br />\n";
                return FALSE;
            }
        }

        /* Get response body. */
        while (!feof($fh)) {
            $this->body .= fgets($fh, 1024);
        }
        fclose($fh);
        if ($need_to_save) { $this->saveToCache(); }
        return $this->status;
    }

    /* PRIVATE getFromUrl() method to scrape content from url. */
    function getFromUrl($url, $user="", $pwd="", $verb="GET") {
        $this->log .= "getFromUrl() called<br />";
        preg_match("~([a-z]*://)?([^:^/]*)(:([0-9]{1,5}))?(/.*)?~i", $url, $parts);
        $protocol = $parts[1];
        $server = $parts[2];
        $port = $parts[4];
        $path = $parts[5];
        if ($port == "") {
            if (strtolower($protocol) == "https://") { $port = "443"; }
            else { $port = "80"; }
        }
        if ($path == "") { $path = "/"; }
        if (!$sock = @fsockopen(((strtolower($protocol) == "https://")?"ssl://":"").$server, $port, $errno, $errstr, $this->connect_timeout)) {
            $this->log .= "Could not open connection. Error ".$errno.": ".$errstr."<br />\n";
            return false;
        }
        $this->headers["Host"] = $server.":".$port;
        if ($user != "" && $pwd != "") {
            $this->log .= "Authentication will be attempted<br />\n";
            $this->headers["Authorization"] = "Basic ".base64_encode($user.":".$pwd);
        }
        if (count($this->postvars) > 0) {
            $this->log .= "Variables will be POSTed<br />\n";
            $request = "POST ".$path." HTTP/1.0\r\n";
            $post_string = "";
            foreach ($this->postvars as $key=>$value) {
                $post_string .= "&".urlencode($key)."=".urlencode($value);
            }
            $post_string = substr($post_string,1);
            $this->headers["Content-Type"] = "application/x-www-form-urlencoded";
            $this->headers["Content-Length"] = strlen($post_string);
        } elseif (strlen($this->xmlrequest) > 0) {
            $this->log .= "XML request will be sent<br />\n";
            $request = $verb." ".$path." HTTP/1.0\r\n";
            $this->headers["Content-Length"] = strlen($this->xmlrequest);
        } else {
            $request = $verb." ".$path." HTTP/1.0\r\n";
        }
        #echo "<br />request: ".$request;
        if (fwrite($sock, $request) === FALSE) {
            fclose($sock);
            $this->log .= "Error writing request type to socket<br />\n";
            return false;
        }
        foreach ($this->headers as $key=>$value) {
            if (fwrite($sock, $key.": ".$value."\r\n") === FALSE) {
                fclose($sock);
                $this->log .= "Error writing headers to socket<br />\n";
                return false;
            }
        }
        if (fwrite($sock, "\r\n") === FALSE) {
            fclose($sock);
            $this->log .= "Error writing end-of-line to socket<br />\n";
            return false;
        }
        #echo "<br />post_string: ".$post_string;
        if (count($this->postvars) > 0) {
            if (fwrite($sock, $post_string."\r\n") === FALSE) {
                fclose($sock);
                $this->log .= "Error writing POST string to socket<br />\n";
                return false;
            }
        } elseif (strlen($this->xmlrequest) > 0) {
            if (fwrite($sock, $this->xmlrequest."\r\n") === FALSE) {
                fclose($sock);
                $this->log .= "Error writing xml request string to socket<br />\n";
                return false;
            }
        }
        return $sock;
    }

    /* PRIVATE clean() method to reset the instance back to mostly new state. */
    function clean() {
        $this->status = "";
        $this->header = "";
        $this->body = "";
        $this->headers = array();
        $this->postvars = array();

        /* Try to use user agent of the user making this request. If not
           available, default to IE6.0 on WinXP, SP1. */
        if (isset($_SERVER['HTTP_USER_AGENT'])) {
            $this->headers["User-Agent"] = $_SERVER['HTTP_USER_AGENT'];
        } else {
            $this->headers["User-Agent"] = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)";
        }

        /* Set referrer to the current script since in essence, it is the
           referring page. */
        if (substr($_SERVER['SERVER_PROTOCOL'],0,5) == "HTTPS") {
            $this->headers["Referer"] = "https://".$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI'];
        } else {
            $this->headers["Referer"] = "http://".$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI'];
        }
    }

    /* PRIVATE getFromCache() method to retrieve content from cache file. */
    function getFromCache() {
        $this->log .= "getFromCache() called<br />";
        //create file pointer
        if (!$fp=@fopen($this->filename,"r")) {
            $this->log .= "Could not open ".$this->filename."<br />";
            return false;
        }
        return $fp;
    }

    /* PRIVATE saveToCache() method to save content to cache file. */
    function saveToCache() {
        $this->log .= "saveToCache() called<br />";
        //create file pointer
        if (!$fp=@fopen($this->filename,"w")) {
            $this->log .= "Could not open ".$this->filename."<br />";
            return false;
        }
        //write to file
        if (!@fwrite($fp,$this->header."\r\n".$this->body)) {
            $this->log .= "Could not write to ".$this->filename."<br />";
            fclose($fp);
            return false;
        }
        //close file pointer
        fclose($fp);
        return true;
    }

    /* PRIVATE getFile_ts() method to get cache file modified date. */
    function getFile_ts() {
        $this->log .= "getFile_ts() called<br />";
        if (!file_exists($this->filename)) {
            $this->data_ts = 0;
            $this->log .= $this->filename." does not exist<br />";
            return false;
        }
        $this->data_ts = filemtime($this->filename);
        return true;
    }

    /* Static method table_into_array()
       Generic function to return data array from HTML table data
       rawHTML: the page source
       needle: optional string to start parsing source from
       needle_within: 0 = needle is BEFORE table, 1 = needle is within table
       allowed_tags: list of tags to NOT strip from data, e.g. "<a><b>" */
    function table_into_array($rawHTML, $needle="", $needle_within=0, $allowed_tags="") {
        $upperHTML = strtoupper($rawHTML);
        $idx = 0;
        if (strlen($needle) > 0) {
            $needle = strtoupper($needle);
            $idx = strpos($upperHTML,$needle);
            if ($idx === false) { return false; }
            if ($needle_within == 1) {
                $cnt = 0;
                while(($cnt < 100) && (substr($upperHTML,$idx,6) != "<TABLE")) {
                    $idx = strrpos(substr($upperHTML,0,$idx-1),"<");
                    $cnt++;
                }
            }
        }
        $aryData = array();
        $rowIdx = 0;

        /* If this table has a header row, it may use TD or TH, so check
           special for this first row. */
        $tmp = strpos($upperHTML,"<TR",$idx);
        if ($tmp === false) { return false; }
        $tmp2 = strpos($upperHTML,"</TR>",$tmp);
        if ($tmp2 === false) { return false; }
        $row = substr($rawHTML,$tmp,$tmp2-$tmp);
        $pattern = "/<TH>|<TH\ |<TD>|<TD\ /";
        preg_match($pattern,strtoupper($row),$matches);
        $hdrTag = $matches[0];
        while (($tmp = strpos(strtoupper($row),$hdrTag)) !== false) {
            $tmp = strpos(strtoupper($row),">",$tmp);
            if ($tmp === false) { return false; }
            $tmp++;
            $tmp2 = strpos(strtoupper($row),"</T");
            $aryData[$rowIdx][] = trim(strip_tags(substr($row,$tmp,$tmp2-$tmp),$allowed_tags));
            $row = substr($row,$tmp2+5);
            preg_match($pattern,strtoupper($row),$matches);
            $hdrTag = $matches[0];
        }
        $idx = strpos($upperHTML,"</TR>",$idx)+5;
        $rowIdx++;

        /* Now parse the rest of the rows. */
        $tmp = strpos($upperHTML,"<TR",$idx);
        if ($tmp === false) { return false; }
        $tmp2 = strpos($upperHTML,"</TABLE>",$idx);
        if ($tmp2 === false) { return false; }
        $table = substr($rawHTML,$tmp,$tmp2-$tmp);
        while (($tmp = strpos(strtoupper($table),"<TR")) !== false) {
            $tmp2 = strpos(strtoupper($table),"</TR");
            if ($tmp2 === false) { return false; }
            $row = substr($table,$tmp,$tmp2-$tmp);
            while (($tmp = strpos(strtoupper($row),"<TD")) !== false) {
                $tmp = strpos(strtoupper($row),">",$tmp);
                if ($tmp === false) { return false; }
                $tmp++;
                $tmp2 = strpos(strtoupper($row),"</TD");
                $aryData[$rowIdx][] = trim(strip_tags(substr($row,$tmp,$tmp2-$tmp),$allowed_tags));
                $row = substr($row,$tmp2+5);
            }
            $table = substr($table,strpos(strtoupper($table),"</TR>")+5);
            $rowIdx++;
        }
        return $aryData;
    }

    /* Static method table_into_xml()
       Generic function to return xml dataset from HTML table data
       rawHTML: the page source
       needle: optional string to start parsing source from
       allowedTags: list of tags to NOT strip from data, e.g. "<a><b>" */
    function table_into_xml($rawHTML, $needle="", $needle_within=0, $allowedTags="") {
        if (!$aryTable = http::table_into_array($rawHTML,$needle,$needle_within,$allowedTags)) {
            return false;
        }
        $xml = "<?xml version=\"1.0\" standalone=\"yes\" ?>\n";
        $xml .= "<TABLE>\n";
        $rowIdx = 0;
        foreach ($aryTable as $row) {
            $xml .= "\t<ROW id=\"".$rowIdx."\">\n";
            $colIdx = 0;
            foreach ($row as $col) {
                $xml .= "\t\t<COL id=\"".$colIdx."\">".trim(utf8_encode(htmlspecialchars($col)))."</COL>\n";
                $colIdx++;
            }
            $xml .= "\t</ROW>\n";
            $rowIdx++;
        }
        $xml .= "</TABLE>";
        return $xml;
    }
}
?>

Click here to be taken to the page where you can download all of the code above.

So now you have some code to create a server side proxy and can use it to create a great new mashup! If you know of any other good server side proxy code in PHP, Ruby, Java, .Net or any other web language please leave it in the comments.

ajax im 3.2 released


Last month we had a post on the ajax chat and instant messaging clients created by unwieldly studios.

ajax im logo

I've just been notified that a new version of their instant messenger, ajax im 3.2, has been released, and there are many changes including:

  • a major overhaul of the code: everything (PHP and JS) is now
    object-oriented instead of procedural
  • multiple language support
  • admin panel added, supports searching for users, banning, kicking,
    and making/removing admin
  • PHP-based sessions implemented, so the username and password aren't
    sent on every message request
  • and many others.

ajax im buddy list

You can read more about ajax im at ajaxim.com, and you can view a demo at ajaxim.net.

Demo account usernames are "test" and "test1" through "test4"; the password is "test".

Developing Google Web Toolkit Applications with NetBeans 6


With the recent release of NetBeans 6 there are a lot of interesting things happening in the Java world. One of the best toolkits for building Ajax applications for Java developers is the Google Web Toolkit. In case you have never used the Google Web Toolkit (a.k.a. GWT), it makes it much easier for Java programmers to create Ajax applications because you write your client-side code in Java and GWT compiles it to JavaScript. NetBeans 6 now has a GWT plug-in to help you develop your GWT-based applications in NetBeans, combining the power of these great tools.

Below is an excerpt that will get you started in your development.

Although GWT is not supported in NetBeans 6 out of the box, you can download this GWT plug-in and start developing GWT-based applications in NetBeans.

The first step is to install the plug-in using the Plug-in manager. Go to the "Tools | Plugins" menu action, switch to the "Downloaded" tab and locate the plug-in on your disk drive. You don't even have to restart your IDE - GWT support is instantly available for you!

The plug-in is seamlessly integrated into NetBeans IDE. That means that when you create a new web application GWT is shown as one of the available frameworks in the last step of the New project wizard. Here you can specify the GWT installation folder and choose the main entry point class of your GWT module.


You can use the plug-in whether you are starting from scratch or working on an existing GWT application. So if you used a different IDE than NetBeans before, it is easy to switch the GWT application to NetBeans. You just point the wizard to your existing GWT application and create a new project from existing sources.

Once the project is created you can run the application simply by hitting the Run button. There are two options: you can use the default Run action, which deploys the application to the application server and opens your default web browser, or you can run GWT in hosted mode, in which case the GWT development shell opens and you can see your application inside it.


Debugging is also supported, so you can just run the Debug action and the runtime is run in debug mode. You can simply add breakpoints, step into, step out, etc. as you would expect in a regular web application.


NetBeans already provides lots of tooling out of the box that you can take advantage of, like a powerful XML editor, HTML editor and of course a Java editor with code completion and quick fixes. NetBeans 6 made huge strides in improving the editing experience and it shows when developing GWT applications, too. All GWT APIs are available for you, including javadoc documentation, because the GWT jars are automatically added to the project during its creation.

To learn more about GWT support in NetBeans, the project homepage and screencast can help you get started. Sang Shin, a member of the Sun technology evangelism team, also created a free course for GWT and NetBeans, so you can learn from his examples and do the hands-on lab.

The plug-in was developed as an open source project so we encourage developers to join the project and contribute. There are many ways you can contribute, even submitting an issue or request for enhancement counts.

The future roadmap contains exciting features such as refactoring for GWT and GWT-specific quick fixes in the editor which will make developing GWT code even more comfortable. We are always looking for feedback, so if you try out the project let the developers know what you think.

Click here to read the full post.

Now that you know about these great tools get to coding some great Ajax applications in Java using the Google Web Toolkit!

DWR 3.0 vision

DWR 3.0 is going to be released soon. Below is the vision from Joe, the founder of DWR.

DWR 2.0 has been out for 6 months or so. At the time, I swore that the next release would be a small one, called 2.1. However it appears that I’m not good at swearing because there is lots in the next release - I think we’re going to have to call it 3.0.

Since 2.0, we've been working on the following: adding support for JSON, Bayeux, images/binary file upload/download, a Hub with JMS/OAA support and more reverse Ajax APIs. I also want to get some Gears integration going.

There are also a whole set of non-functional things to consider:
* Moving the website to directwebremoting.org
* Restart chasing CLAs, using a foundation CLA rather than a Getahead CLA
* Get some lawyer to create a CLA so Getahead can grant rights to the Foundation (or something similar)
* Get someone to pony up and let us move to SVN
* Unit tests

JSON support: One goal is a RESTian API so you can do something like this: http://example.com/dwr/json/ClassName/methodName?param1=fred;param2=jim and DWR will reply with a JSON structure containing the result of calling className.methodName("fred", "jim"); It would be good to support JSONP along with this. We might also allow POSTing of JSON structures, although I’m less convinced about this because it quickly gets DWR specific, and then what’s the point of a standard. Status - DWR has always used a superset of JSON that I like to call JavaScript. We do this to cope with recursive data, XML objects, and such like. I’ve done most of the work so that DWR can use the JSON subset, but not created the ‘handler’ to interface between the web and a JSON data structure.
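To illustrate the URL scheme described above (this parser is a hypothetical sketch, not DWR code): the last two path segments name the class and method, and the `;`-separated `name=value` pairs supply the parameters.

```javascript
// Split a URL like /dwr/json/ClassName/methodName?param1=fred;param2=jim
// into its class, method, and parameter values. Illustrative only.
function parseDwrJsonUrl(url) {
  var qs = url.split("?");
  var segments = qs[0].split("/");
  var params = [];
  if (qs.length > 1) {
    var pairs = qs[1].split(";");
    for (var i = 0; i < pairs.length; i++) {
      params.push(pairs[i].split("=")[1]); // keep just the value
    }
  }
  return {
    className: segments[segments.length - 2],
    methodName: segments[segments.length - 1],
    params: params
  };
}

var call = parseDwrJsonUrl("/dwr/json/ClassName/methodName?param1=fred;param2=jim");
// call now describes ClassName.methodName("fred", "jim")
```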

Bayeux Support: Greg Wilkins (Jetty) committed some changes to DWR, which need some tweaks to get working properly. Greg still intends to complete this.

File/Image Upload and Download: This allows a Java method to return an AWT BufferedImage and have that image turn up in the page, or to take or return an InputStream and have that populated from a file upload or offered as a file download. I’ve had some bug reports that it doesn’t work with some browsers, also we need to find a way to report progress to a web page simply.

DWR Hub and integration with JMS and OpenAjax Hub: We have a hub, along with one-way integration with JMS. The OpenAjax portion will be simple except for getting the OpenAjax Hub to work smoothly with JMS. Much of this work has not hit CVS yet, but will do soon.

Reverse Ajax Proxy API Generator: The goal with this is a program that will take JavaScript as input, and output a Java API which, when called, generates JavaScript to send to a browser. Some of this work has been tricky, but then meta-meta-programming was always bound to be hard. This currently mostly works with TIBCO GI, but more work will be needed to allow it to extract type information from other APIs.

DOM Manipulation Library: Currently this is limited to window.alert, mostly because I’m not sure how far to take it. There are a set of things like history, location, close, confirm that could be useful from a server, and that are not typically abstracted by libraries.

Gears Integration: I’ve not started this, but it needs to take higher priority than it currently does. It would be very cool if DWR would transparently detect Gears, and then allow some form of guaranteed delivery including resending of messages if the network disappears for a while.

Website: We need to get the DWR website moved away from the Getahead server, and onto Foundation servers. There will be some URLs to alter as part of this, and I don’t want to lose Google juice by doing it badly.
The documentation for DWR 2 was not up to the standards of 1.x, and while it has been getting better, we could still do more. One thing that has held this back has been lack of a DWR wiki. I hope we can fix this with the server move.

Source Repo: We are currently using CVS hosted by java.net (which is a collab.net instance - yuck). They support SVN, but want to charge me a few hundred dollars to upgrade. Maybe the Foundation can either ridicule them into submission or pay the few hundred dollars for the meta-data so we can host the repo ourselves. The latter option is probably better.

Unit Tests: I've been trying for ages to find a way to automatically test with multiple browsers and servers. WebDriver looked good for a while, but it doesn't look like the project is going anywhere particularly quickly, so I'm back trying to get Selenium to act in a sane way.

XML versus JSON - What is Best for Your App?


One of the biggest debates in Ajax development today is JSON versus XML. This is at the heart of the data end of Ajax since you usually receive JSON or XML from the server side (although these are not the only methods of receiving data). Below I will be listing pros and cons of both methods.

If you have been developing Ajax applications for any length of time you will more than likely be familiar with XML data. You also know that XML data is very powerful and that there are quite a few ways to deal with it. One way is to simply apply an XSLT style sheet to the data (I won't have time in this post to go over the inconsistent browser support for XSLT, but it is something to look into if you want to do this). This is useful if you just want to display the data. However, if you want to do something programmatically with the data (as in the case of a web service) you will need to parse the data nodes that are returned to the XMLHttpRequest object (this is done by going through the document tag by tag and getting the needed data). Of course there are quite a few good pre-written libraries that can make going through the XML data easier, and I recommend using a good one (I won't go into depth as to which libraries I prefer here, but perhaps in a future post). One thing to note is that if you want to get XML data from another domain you will have to use a server side proxy, as the browser will not allow receiving data across domains.
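In the browser, that parsing means walking `xhr.responseXML` node by node, typically with `getElementsByTagName`. As a rough, self-contained stand-in (a regular expression is not a real XML parser; the `textOfTags` helper below is purely illustrative):

```javascript
// Pull the text content of every <tag> element out of an XML string.
// Purely illustrative -- in the browser you would use the real DOM:
//   var nodes = xhr.responseXML.getElementsByTagName("title");
//   var first = nodes[0].firstChild.nodeValue;
function textOfTags(xml, tag) {
  var re = new RegExp("<" + tag + ">([^<]*)</" + tag + ">", "g");
  var out = [];
  var m;
  while ((m = re.exec(xml)) !== null) {
    out.push(m[1]); // the captured text between the tags
  }
  return out;
}

var xml = "<feed><title>First</title><title>Second</title></feed>";
var titles = textOfTags(xml, "title");
```

Either way, the point stands: with XML you must explicitly walk the structure to get at the values.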

JSON is designed to be a more programmatic way of dealing with data. JSON (JavaScript Object Notation) returns data as JavaScript objects. In an Ajax application using JSON you would receive text through the XMLHttpRequest object (or by loading the data directly through a script tag, which I will touch on later) and then pass that text through an eval statement or use DOM manipulation to pass it into a script tag (if you haven't already read my post on using JSON without using eval, click here to read it). The power of this is that you can use the data in JavaScript without any parsing of the text. The downside is that if you just want to display the data there is no easy way to do it with JSON. JSON is great for web services coming from different domains since, if you load the data through a script tag, you can get the data without any domain constraint.
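As a small illustration of why no parsing step is needed: the response text is itself JavaScript, so evaluating it yields a live object. Wrapping the text in parentheses makes the braces an object literal rather than a code block; and since eval runs whatever it is given, it should only be used on data from a trusted source.

```javascript
// Simulated XMLHttpRequest responseText containing JSON.
var responseText = '{"name": "Ajaxonomy", "tags": ["ajax", "json"]}';

// Parentheses force the text to be read as an object literal.
// eval executes whatever it is given, so the data source must be trusted.
var data = eval("(" + responseText + ")");

// data is now a normal JavaScript object -- no tag-by-tag parsing needed.
var firstTag = data.tags[0];
```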

The type of data that you use for your application will depend on quite a few factors. If you are going to be using the data programmatically then in most cases JSON is the better data method to use. On the other hand, if you just want to display the data returned I would recommend XML. Of course there may be other factors, such as whether you are using a web service, which could dictate the data method. If you are getting data from a different domain and JSON is available, it may be the better choice. For Ruby on Rails developers, if you would prefer to use JSON and XML is all that is available, the 2.0 release allows you to convert XML into JSON. One of the biggest reasons that people use JSON is the size of the data. In most cases JSON uses a lot less data to send to your application (of course this may vary depending on the data and how the XML is formed).
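A quick, hedged illustration of the size difference (the actual savings depend entirely on the data and how the XML is formed): the same record in both notations.

```javascript
// The same record expressed as XML and as JSON. Element names repeat in
// XML (open and close tags), which is where most of the overhead comes from.
var asXml =
  "<person><name>Bob</name><city>Austin</city><age>32</age></person>";
var asJson =
  '{"name":"Bob","city":"Austin","age":32}';

var xmlBytes = asXml.length;   // 65
var jsonBytes = asJson.length; // 39
```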

I would recommend that you take a good look at the application that you are building and decide based on the above which type of data you should deal with. There may be more factors than the above including corporate rules and developer experience, but the above should have given you a good idea as to when to use either data method.

If you would like to contact me regarding any of the above you can make me your friend on Social Ajaxonomy and send a message to me through the service (Click here to go to my profile on Social Ajaxonomy).

Rails 2.0 Finally Released - What's New

Ruby on Rails is one of the most used frameworks for new Web 2.0 startups. This 2.0 release is the second recent present that we web developers have received this Christmas (the first was OpenID 2.0). Since Rails 2.0 was recently released I wanted to write about the recent changes.

Below is a rundown of the changes right from the Ruby on Rails blog.

Action Pack: Resources

This is where the bulk of the action for 2.0 has gone. We’ve got a slew of improvements to the RESTful lifestyle. First, we’ve dropped the semicolon for custom methods in favor of the regular slash. So /people/1;edit is now /people/1/edit. We’ve also added the namespace feature to routing resources that makes it really easy to confine things like admin interfaces:

map.namespace(:admin) do |admin|
  admin.resources :products,
    :collection => { :inventory => :get },
    :member     => { :duplicate => :post },
    :has_many   => [ :tags, :images, :variants ]
end

Which will give you named routes like inventory_admin_products_url and admin_product_tags_url. To keep track of this named routes proliferation, we’ve added the “rake routes” task, which will list all the named routes created by routes.rb.

We’ve also instigated a new convention that all resource-based controllers will be plural by default. This allows a single resource to be mapped in multiple contexts and still refer to the same controller. Example:

  # /avatars/45 => AvatarsController#show
  map.resources :avatars

  # /people/5/avatar => AvatarsController#show 
  map.resources :people, :has_one => :avatar

Action Pack: Multiview

Alongside the improvements for resources come improvements for multiview. We already have #respond_to, but we’ve taken it a step further and made it dig into the templates. We’ve separated the format of the template from its rendering engine. So show.rhtml now becomes show.html.erb, which is the template that’ll be rendered by default for a show action that has declared format.html in its respond_to. And you can now have something like show.csv.erb, which targets text/csv, but also uses the default ERB renderer.

So the new format for templates is action.format.renderer. A few examples:

  • show.erb: same show template for all formats
  • index.atom.builder: uses the Builder format, previously known as rxml, to render an index action for the application/atom+xml mime type
  • edit.iphone.haml: uses the custom HAML template engine (not included by default) to render an edit action for the custom Mime::IPHONE format

Speaking of the iPhone, we’ve made it easier to declare “fake” types that are only used for internal routing. Like when you want a special HTML interface just for an iPhone. All it takes is something like this:

# Should go in config/initializers/mime_types.rb
Mime.register_alias "text/html", :iphone

class ApplicationController < ActionController::Base
  before_filter :adjust_format_for_iphone

  private
    def adjust_format_for_iphone
      if request.env["HTTP_USER_AGENT"] && request.env["HTTP_USER_AGENT"][/(iPhone|iPod)/]
        request.format = :iphone
      end
    end
end

class PostsController < ApplicationController
  def index
    respond_to do |format|
      format.html   # renders index.html.erb
      format.iphone # renders index.iphone.erb
    end
  end
end

You’re encouraged to declare your own mime-type aliases in the config/initializers/mime_types.rb file. This file is included by default in all new applications.

Action Pack: Record identification

Piggy-backing off the new drive for resources are a number of simplifications for controller and view methods that deal with URLs. We’ve added a number of conventions for turning model classes into resource routes on the fly. Examples:

  # person is a Person object, which by convention will 
  # be mapped to person_url for lookup
  link_to(person.name, person)

Action Pack: HTTP Loving

As you might have gathered, Action Pack in Rails 2.0 is all about getting closer with HTTP and all its glory. Resources, multiple representations, but there’s more. We’ve added a new module to work with HTTP Basic Authentication, which turns out to be a great way to do API authentication over SSL. It’s terribly simple to use. Here’s an example (there are more in ActionController::HttpAuthentication):

class PostsController < ApplicationController
  USER_NAME, PASSWORD = "dhh", "secret"

  before_filter :authenticate, :except => [ :index ]

  def index
    render :text => "Everyone can see me!"
  end

  def edit
    render :text => "I'm only accessible if you know the password"
  end

  private
    def authenticate
      authenticate_or_request_with_http_basic do |user_name, password|
        user_name == USER_NAME && password == PASSWORD
      end
    end
end

We’ve also made it much easier to structure your JavaScript and stylesheet files in logical units without getting clobbered by the HTTP overhead of requesting a bazillion files. Using javascript_include_tag(:all, :cache => true) will turn public/javascripts/*.js into a single public/javascripts/all.js file in production, while still keeping the files separate in development, so you can work iteratively without clearing the cache.

Along the same lines, we’ve added the option to cheat browsers who don’t feel like pipelining requests on their own. If you set ActionController::Base.asset_host = “assets%d.example.com”, we’ll automatically distribute your asset calls (like image_tag) to asset1 through asset4. That allows the browser to open many more connections at a time and increases the perceived speed of your application.
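The distribution idea can be sketched in plain Ruby. Note the pattern string and the modulo-by-four hashing here are illustrative assumptions, not the exact Rails internals:

```ruby
# Sketch: pick one of four asset hosts deterministically per source path,
# so a given image always resolves to the same host (and stays cacheable).
def asset_host_for(source, pattern = "assets%d.example.com")
  pattern % (source.hash.abs % 4)
end
```

Because the choice is a pure function of the source path, repeated page renders keep pointing the browser at the same host for the same asset.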

Action Pack: Security

Making it even easier to create secure applications out of the box is always a pleasure and with Rails 2.0 we’re doing it on a number of fronts. Most importantly, we now ship with a built-in mechanism for dealing with CSRF attacks. By including a special token in all forms and Ajax requests, you can guard against having requests made from outside of your application. All this is turned on by default in new Rails 2.0 applications and you can very easily turn it on in your existing applications using ActionController::Base.protect_from_forgery (see ActionController::RequestForgeryProtection for more).

We’ve also made it easier to deal with XSS attacks while still allowing users to embed HTML in your pages. The old TextHelper#sanitize method has gone from a black list (very hard to keep secure) approach to a white list approach. If you’re already using sanitize, you’ll automatically be granted better protection. You can tweak the tags that are allowed by default with sanitize as well. See TextHelper#sanitize for details.
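The white-list idea can be sketched in a few lines of plain Ruby. This naive regex version is only meant to show the concept; Rails’ real sanitize uses a proper HTML parser, and the allowed-tag set here is a made-up example:

```ruby
require 'set'

# Tags we explicitly trust; everything else gets stripped.
ALLOWED_TAGS = Set.new(%w[b i em strong])

# Naive white-list sketch: any tag not on the allowed list is removed,
# so unknown (and therefore potentially dangerous) markup can't slip through.
def whitelist_sanitize(html)
  html.gsub(%r{</?\s*([a-zA-Z0-9]+)[^>]*>}) do |tag|
    ALLOWED_TAGS.include?($1.downcase) ? tag : ""
  end
end
```

The key property of a white list is that new attack vectors are stripped by default, whereas a black list has to be updated every time a new dangerous tag or attribute is discovered.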

Finally, we’ve added support for HTTP only cookies. They are not yet supported by all browsers, but you can use them where they are.

Action Pack: Exception handling

Lots of common exceptions would do better to be rescued at a shared level rather than per action. This has always been possible by overriding rescue_action_in_public, but then you had to roll your own case statement and call super. Bah. So now we have a class-level macro called rescue_from, which you can use to declaratively point certain exceptions to a given action. Example:

  class PostsController < ApplicationController
    rescue_from User::NotAuthorized, :with => :deny_access

    protected
      def deny_access
        ...
      end
  end
Action Pack: Cookie store sessions

The default session store in Rails 2.0 is now a cookie-based one. That means sessions are no longer stored on the file system or in the database, but kept by the client in a signed form that can’t be forged. This makes it not only a lot faster than traditional session stores, but also makes it zero maintenance. There’s no cron job needed to clear out the sessions and your server won’t crash because you forgot and suddenly had 500K files in tmp/session.

This setup works great if you follow best practices and keep session usage to a minimum, such as the common case of just storing a user_id and the flash. If, however, you are planning on storing the nuclear launch codes in the session, the default cookie store is a bad deal. While the cookies can’t be forged (so is_admin = true is fine), their content can be seen. If that’s a problem for your application, you can always just switch back to one of the traditional session stores (but first investigate that requirement as a code smell).
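Why the contents are visible but not forgeable can be shown with a hand-rolled signed cookie. The actual cookie store’s wire format differs; the SECRET constant and the SHA1 HMAC below are assumptions for the sketch:

```ruby
require 'openssl'
require 'base64'

SECRET = "some-long-server-side-secret"

# The payload is only Base64-encoded, so anyone holding the cookie can read
# it -- but the appended HMAC lets the server detect any tampering.
def write_cookie(data)
  hmac = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new("SHA1"), SECRET, data)
  "#{Base64.strict_encode64(data)}--#{hmac}"
end

# Returns the payload if the signature checks out, nil otherwise.
def read_cookie(cookie)
  encoded, hmac = cookie.split("--")
  data = Base64.strict_decode64(encoded)
  hmac == OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new("SHA1"), SECRET, data) ? data : nil
end
```

Without the server-side secret, a client can decode the payload but cannot produce a valid signature for a modified one.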

Action Pack: New request profiler

Figuring out where your bottlenecks are with real usage can be tough, but we just made it a whole lot easier with the new request profiler that can follow an entire usage script and report on the aggregate findings. You use it like this:

$ cat login_session.rb
get_with_redirect '/'
say "GET / => #{path}"
post_with_redirect '/sessions', :username => 'john', :password => 'doe'
say "POST /sessions => #{path}"

$ ./script/performance/request -n 10 login_session.rb

And you get a thorough breakdown in HTML and text on where time was spent and you’ll have a good idea on where to look for speeding up the application.

Action Pack: Miscellaneous

Also of note is AtomFeedHelper, which makes it even simpler to create Atom feeds using an enhanced Builder syntax. Simple example:

  # index.atom.builder:
  atom_feed do |feed|
    feed.title("My great blog!")

    for post in @posts
      feed.entry(post) do |entry|
        entry.content(post.body, :type => 'html')

        entry.author do |author|
          author.name(post.author_name)
        end
      end
    end
  end

We’ve made a number of performance improvements, so asset tag calls are now much cheaper and we’re caching simple named routes, making them much faster too.

Finally, we’ve kicked out in_place_editor and autocomplete_for into plugins that live on the official Rails SVN.

Active Record: Performance

Active Record has seen a gazillion fixes and small tweaks, but it’s somewhat light on big new features. Something new that we have added, though, is a very simple Query Cache, which will recognize similar SQL calls from within the same request and return the cached result. This is especially nice for N+1 situations that might be hard to handle with :include or other mechanisms. We’ve also drastically improved the performance of fixtures, which makes most test suites based on normal fixture use be 50-100% faster.
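The idea behind the query cache can be sketched as simple memoization keyed on the SQL string. The class and method names below are illustrative, not Rails internals:

```ruby
# Per-request cache: the second identical SELECT never reaches the database.
class QueryCache
  attr_reader :database_hits

  def initialize(&executor)
    @executor = executor # stand-in for the real database call
    @cache = {}
    @database_hits = 0
  end

  def select(sql)
    @cache[sql] ||= begin
      @database_hits += 1
      @executor.call(sql)
    end
  end
end
```

A request would create one cache, funnel its queries through select, and discard the cache when the response is sent, so staleness is bounded by the length of a single request.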

Active Record: Sexy migrations

There’s a new alternative format for declaring migrations that’s slightly more concise. Before, you’d write:

create_table :people do |t|
  t.column "account_id",  :integer
  t.column "first_name",  :string, :null => false
  t.column "last_name",   :string, :null => false
  t.column "description", :text
  t.column "created_at",  :datetime
  t.column "updated_at",  :datetime
end

Now you can write:

create_table :people do |t|
  t.integer :account_id
  t.string  :first_name, :last_name, :null => false
  t.text    :description
  t.timestamps
end

Active Record: Foxy fixtures

The fixtures in Active Record have taken a fair amount of flak lately. One of the key points in that criticism has been the work involved in declaring dependencies between fixtures. Having to relate fixtures through the ids of their primary keys is no fun. That’s been addressed now and you can write fixtures like this:

  # sellers.yml
  shopify:
    name: Shopify

  # products.yml
  pimp_cup:
    seller: shopify
    name: Pimp cup

As you can see, it’s no longer necessary to declare the ids of the fixtures and instead of using seller_id to refer to the relationship, you just use seller and the name of the fixture.

Active Record: XML in, JSON out

Active Record has supported serialization to XML for a while. In 2.0 we’ve added deserialization too, so you can say Person.new.from_xml("<person><name>David</name></person>") and get what you’d expect. We’ve also added serialization to JSON, which supports the same syntax as XML serialization (including nested associations). Just do person.to_json and you’re ready to roll.
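Outside Rails, the shape of the JSON output can be previewed with Ruby’s json library. The person attributes below are made up for the example:

```ruby
require 'json'

# What a person.to_json call boils down to: the record's attributes
# serialized as a JSON object...
attributes = { "name" => "David", "age" => 35 }
json = JSON.generate(attributes)

# ...and deserialization is the reverse trip.
restored = JSON.parse(json)
```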

Active Record: Shedding some weight

To make Active Record a little leaner and meaner, we’ve removed the acts_as_XYZ features and put them into individual plugins on the Rails SVN repository. So say you’re using acts_as_list, you just need to do ./script/plugin install acts_as_list and everything will move along like nothing ever happened.

A little more drastic, we’ve also pushed all the commercial database adapters into their own gems. So Rails now only ships with adapters for MySQL, SQLite, and PostgreSQL. These are the databases that we have easy and willing access to test on. But that doesn’t mean the commercial databases are left out in the cold. Rather, they’ve now been set free to have an independent release schedule from the main Rails distribution. And that’s probably a good thing as the commercial databases tend to require a lot more exceptions and hoop jumping on a regular basis to work well.

The commercial database adapters now live in gems that all follow the same naming convention: activerecord-XYZ-adapter. So if you gem install activerecord-oracle-adapter, you’ll instantly have Oracle available as an adapter choice in all the Rails applications on that machine. You won’t have to change a single line in your applications to make use of it.

That also means it’ll be easier for new database adapters to gain traction in the Rails world. As long as you package your adapter according to the published conventions, users just have to install the gem and they’re ready to roll.

Active Record: with_scope with a dash of syntactic vinegar

ActiveRecord::Base.with_scope has gone protected to discourage people from misusing it in controllers (especially in filters). Instead, it’s now encouraged that you only use it within the model itself. That’s what it was designed for and where it logically remains a good fit. But of course, this is all about encouraging and discouraging. If you’ve weighed the pros and the cons and still want to use with_scope outside of the model, you can always call it through .send(:with_scope).
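The send trick works because send ignores method visibility. A minimal sketch, where Widget and its no-op with_scope are made-up stand-ins for a model and the real scoping method:

```ruby
class Widget
  # Stand-in for the real with_scope: just runs the block.
  def self.with_scope(options = {})
    yield
  end

  # Mark the class-level method protected, as Rails 2.0 does.
  class << self
    protected :with_scope
  end
end

# Widget.with_scope { ... } from outside now raises NoMethodError,
# but send bypasses the visibility check:
result = Widget.send(:with_scope) { "scoped work" }
```

That extra .send(:with_scope) is the "syntactic vinegar": it still works, but the call site makes it obvious you are going around the intended API.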

ActionWebService out, ActiveResource in

It’ll probably come as no surprise that Rails has picked a side in the SOAP vs REST debate. Unless you absolutely have to use SOAP for integration purposes, we strongly discourage you from doing so. As a natural extension of that, we’ve pulled ActionWebService from the default bundle. It’s only a gem install actionwebservice away, but it sends an important message nonetheless.

At the same time, we’ve pulled the new ActiveResource framework out of beta and into the default bundle. ActiveResource is like ActiveRecord, but for resources. It follows a similar API and is configured to Just Work with Rails applications using the resource-driven approach. For example, a vanilla scaffold will be accessible by ActiveResource.


Active Support

There’s not all that much new in Active Support. We have a host of new methods like Array#rand for getting a random element from an array, Hash#except for dropping undesired keys from a hash, and lots of extensions for Date. We also made testing a little nicer with assert_difference. Short of that, it’s pretty much just fixes and tweaks.
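The two hash and array helpers can be sketched in plain Ruby. Behavior is inferred from the names; the real implementations live in Active Support, and the random-element method is renamed here to avoid shadowing Kernel#rand:

```ruby
class Array
  # Array#rand in Rails 2.0 returns a random element of the array.
  def random_element
    self[Kernel.rand(size)]
  end
end

class Hash
  # Hash#except returns a copy of the hash without the given keys.
  def except(*keys)
    reject { |key, _| keys.include?(key) }
  end
end
```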

Action Mailer

This is a very modest update for Action Mailer. Besides a handful of bug fixes, we’ve added the option to register alternative template engines and assert_emails to the testing suite, which works like this:

  Assert the number of emails delivered within a block:

    assert_emails 1 do
      post :signup, :name => 'Jonathan'
    end


Rails: The debugger is back

To tie it all together, we have a stream of improvements for Rails in general. My favorite amongst these is the return of the breakpoint in the form of the debugger. It’s a real debugger too, not just an IRB dump. You can step back and forth, list your current position, and much more. It all comes courtesy of the ruby-debug gem, so you’ll have to install that for the new debugger to work.

To use the debugger, you just install the gem, put “debugger” somewhere in your application, and then start the server with --debugger or -u. When the code executes the debugger command, you’ll have it available straight in the terminal running the server. No need for script/breakpointer or anything else. You can use the debugger in your tests too.

Rails: Clean up your environment

Before Rails 2.0, config/environment.rb files everywhere would be clogged with all sorts of one-off configuration details. Now you can gather those elements in self-contained files, put them under config/initializers, and they’ll automatically be loaded. New Rails 2.0 applications ship with two examples in the form of inflections.rb (for your own pluralization rules) and mime_types.rb (for your own mime types). This should ensure that you need to keep nothing but the default in config/environment.rb.

Rails: Easier plugin order

Now that we’ve yanked out a fair amount of stuff from Rails and into plugins, you might well have other plugins that depend on this functionality. This can require that you load, say, acts_as_list before your own acts_as_extra_cool_list plugin in order for the latter to extend the former.

Before, this required that you named all your plugins in config.plugins. Major hassle when all you wanted to say was “I only care about acts_as_list being loaded before everything else”. Now you can do exactly that with config.plugins = [ :acts_as_list, :all ].

And hundreds upon hundreds of other improvements

What I’ve talked about above is but a tiny sliver of the full 2.0 package. We’ve got literally hundreds of bug fixes, tweaks, and feature enhancements crammed into Rails 2.0. All this coming off the work of tons of eager contributors working tirelessly to improve the framework in small, but important ways.

I encourage you to scour the CHANGELOGs and learn more about all that changed.

Click here to read the full post on the Ruby on Rails blog.

There are a lot of big changes here that should be useful when developing with Rails. One of my personal favorites is the ability to take XML in and get JSON out, which, as someone who likes JSON, I expect will come in handy (especially if you are getting data from a web service that uses XML and your application needs JSON). I look forward to seeing what new applications will be built on Rails 2.0.

Special thanks to thegreatone who submitted the Rails 2.0 post from the Ruby on Rails blog on Social Ajaxonomy.
Click here to see the post on Social Ajaxonomy

If you would like to submit a post for chance to have us blog about it click here to go to Social Ajaxonomy or click on the "Social" link located at the top link navigation of Ajaxonomy.


Convert RSS to JSON


John Resig has written another great coding example. The code takes an RSS feed and converts it into JSON. You will also notice that in the code he uses DOM manipulation instead of eval (you can read my post on using JSON without eval by clicking here) to bring the data into the JavaScript.

Below is an excerpt from the post.


This script currently has a REST interface, accessible via a GET request. The full request would look something like this:
GET http://ejohn.org/apps/rss2json/?url=URL&callback=CALLBACK
The URL parameter would contain the URL of the RSS/Atom feed which you are attempting to convert. The optional callback parameter would reference a callback function that you wish to have called with the new data.

You can test this out by visiting the following URL:
Sample Code and Demo

A simple sample program would look something like this:

getRSS("http://digg.com/rss/index.xml", handleRSS);

function handleRSS(rss) {
  alert( "Downloaded: " + rss.title );
}

function getRSS(url, callback) {
  feedLoaded = callback;
  var script = document.createElement('script');
  script.type = 'text/javascript';
  script.src = "http://ejohn.org/apps/rss2json/?url=" + url
    + "&callback=feedLoaded&t=" + (new Date()).getTime();
  document.getElementsByTagName("head")[0].appendChild(script);
}

Click here to read the full post. The post also contains the back end code as well as the above.

This idea could be used for any XML web service that you would like to have in JSON. So, if you have an application that uses JSON and the data you need is in XML try extending this code to meet your needs.

Facebook Application Development with CakePHP


Facebook is one of the largest social networks and has become a great platform to develop for. I have found a good tutorial from Facebook Developers on developing an application with CakePHP (click here to read the original tutorial).

Below are the steps for the tutorial.

Step 1. The first step is to create a new Facebook application. In setting up, only a few of the settings are vital for getting started.

* Include an application name of your preference.
* Within the “Optional Fields” section, provide the callback URL where you intend to host the application. The callback URL should point to the root path of your CakePHP installation.
* Enter a Canvas Page URL that has not already been taken, and make sure FBML is selected.
* If you do not want friends or other random users installing your application yet, make sure to check off the Developer Mode box.
* Lastly, set the Side Nav URL to the same value as the Canvas Page URL.

Step 2. Now that your application is set up, you’ll need to download and unzip the latest stable version of CakePHP.

Step 3. Create a new folder within your CakePHP application under “/app/vendors/facebook”. Download the latest version of the Facebook Platform API, and unzip the contents of the “client” folder into the new “facebook” folder created a moment ago. (PHP4 users should unzip the contents of the “php4client” folder instead.) You should now have the directory contents as follows.

* /app/vendors/facebook/facebook.php
* /app/vendors/facebook/facebook_desktop.php
* /app/vendors/facebook/facebookapi_php5_restlib.php

Step 4. You’ll now need to modify the AppController base class such that all your inherited controllers utilize the Facebook API. To start, copy app_controller.php from “/cake” to “/app”. Next, open the file up in your preferred text editor, and change its contents to match the following. (Make sure you change the values for the Facebook API key and secret in the process.)

class AppController extends Controller {
    var $facebook;

    var $__fbApiKey = 'YOUR_API_KEY';
    var $__fbSecret = 'YOUR_SECRET_KEY';

    function __construct() {
        // Prevent the 'Undefined index: facebook_config' notice from being thrown.
        $GLOBALS['facebook_config']['debug'] = NULL;

        // Create a Facebook client API object.
        $this->facebook = new Facebook($this->__fbApiKey, $this->__fbSecret);

        parent::__construct();
    }
}

Step 5. Create a basic controller class that inherits the AppController defined above. Here we’ll perform basic Facebook calls such as logging in. Additionally, an example view named “index” is included, representing the index page of the things controller.

class ThingsController extends AppController {
    var $user;

    /**
     * Name: beforeFilter
     * Desc: Performs necessary steps and function calls prior to executing
     *       any view function calls.
     */
    function beforeFilter() {
        $this->user = $this->facebook->require_login();
    }

    /**
     * Name: index
     * Desc: Display the friends index page.
     */
    function index() {
        // Retrieve the user's friends and pass them to the view.
        $friends = $this->facebook->api_client->friends_get();
        $this->set('friends', $friends);
    }
}

Step 6. Create a placeholder model object under “/app/models” named thing.php. Again, place the following contents into the new file.

class Thing extends AppModel {
    var $name = 'Thing';
    var $useTable = false;
}

Step 7. In order to ensure consistency between pages, you’ll want to create a default layout. This is a place to include header and footer FBML. Create a new document in your text editor named “/app/views/layouts/default.thtml”, and insert some code such as the following. The vital part that must be included is the echo call to print the $content_for_layout variable.

<fb:google-analytics uacct="YOUR-GOOGLE-ID"/>

<style type="text/css">
  .container { padding:10px; }
</style>

<fb:dashboard>
  <fb:action href="things">My Things</fb:action>
  <fb:action href="things/browse">Browse Things</fb:action>
  <fb:action href="things/search">Search Things</fb:action>
  <fb:create-button href="things/add">Add Things</fb:create-button>
</fb:dashboard>

<div class="container"><?php echo $content_for_layout;?></div>

Step 8. Finally, you need to create a file that represents the layout of the index view defined as a function of ThingsController from step 5. Create a new file named “/app/views/things/index.thtml”, and insert the below contents. Note the use of the $friends variable, which was passed from the index function via a call to the controller’s set function.

<p><b>My Things</b></p>
<p>My Friends:</p>
<ul>
<?php foreach ($friends as $friend):?>
  <li><?php echo $friend;?></li>
<?php endforeach;?>
</ul>

Step 9. The last step is to upload your cake application to your server (making sure to match the callback URL path set for your application). You can now access the page via: http://apps.facebook.com/YOUR-APP-PATH/things.

Now that you have an easy tutorial I would love to hear about any good projects that you have created for Facebook (perhaps we should make an Ajaxonomy Facebook application and of course blog about the development). You can put them in the comments or if they are blogged about put them in social for a chance to have an article written in the main Ajaxonomy blog.
