Web 2.0

Ajaxonomy's Digg Bury Recorder Version Beta 0.2 Released


We are proud to announce the release of Ajaxonomy's Digg Bury Recorder version Beta 0.2. This new release has a number of new features and fixes some issues that occurred with version Beta 0.1. Below is the list of new features and bug fixes.

New Features

  • Captures all buries for all stories
  • Graphs diggs to buries

Bug Fixes

  • Recorder not capturing all buries

The first new feature above is one of my favorites. The tool now captures buries for all stories posted to Digg. The great thing is that you no longer have to leave the application running overnight, and you won't miss a bury because the application wasn't started!

The second feature shows a graph of diggs and buries, so now you can see how many diggs you had at the time of each bury. The graph was created using Google's Chart API.
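For the curious, here is a rough sketch of how a diggs-at-bury series can be turned into a chart image URL with the Chart API. The data values and the "buryChart" element are made up for illustration, and the recorder's actual code may differ.

// Hypothetical sketch of charting diggs-at-bury data with the Google Chart API.
// The data values and the "buryChart" element are made up for illustration.
var diggsAtEachBury = [68, 68, 95, 104]; // diggs the story had when each bury arrived

function buildBuryChartUrl(data) {
    return "http://chart.apis.google.com/chart" +
        "?cht=lc" +                               // line chart
        "&chs=300x150" +                          // chart size in pixels
        "&chds=0," + Math.max.apply(null, data) + // scale the y-axis to the data
        "&chd=t:" + data.join(",") +              // the data series (text encoding)
        "&chtt=Diggs+at+time+of+each+bury";       // chart title
}

// Point an <img> tag at the generated URL to render the graph.
document.getElementById("buryChart").src = buildBuryChartUrl(diggsAtEachBury);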

As for fixes, this release addresses a bug that was causing some buries not to be captured, due to server delays and browser issues. The fix has been tested and appears to be working, but if you notice an issue, contact us using the Contact button on this blog.

Well, that is a rundown of the new features and fixes in this release, so please let us know of any bugs you find. Also, if you find any interesting information using the tool, you can sign up for a free account on this blog and write a post like this one, which has a chance to appear on the home page of Ajaxonomy.com (a.k.a. this blog).

In case you don't have the link to the application, click here to be taken to it.

Update:
It has been pointed out that the application does not capture 100% of Digg bury data. It does capture 100% of Digg Spy's bury data, which includes all buries for upcoming and popular stories. However, it does not capture buries on stories that have already been fully buried, as those can no longer make the Digg home or popular pages. So the tool will show you the buries that matter (the buries that keep you off the home page).

Thanks to everyone who pointed this out.

The Weight of a Bury


You may have seen my last post about the launch of the Bury Recorder (click here to read the original post). To fully test the application, I ran the Digg story for that post through the Bury Recorder, and I got some interesting results.

The story was fully buried (meaning it no longer shows in the upcoming, popular, or hot sections of Digg) with 68 diggs. What was incredible was that it was only buried 4 times! Also remarkable: as of this writing, 22 hours after submission, the story has 107 diggs (which is kind of amazing in itself, since it was buried way back at 68 diggs). Another interesting thing is that the first two buries were between 9:00pm and 11:00pm, while the last two were between 5:00am and 7:00am. I'm not sure if this means anything, but the spacing seems interesting. It seems incredible to me that a bury would be given so much weight at this ratio. The image below shows the actual data from the application.

I'm not sure if this is a fluke and whether it would normally take a larger ratio to fully bury a story (I know the algorithm changes depending on who dugg the story, the time of day, etc.), so I'll be using this on some other Digg stories and will write about my findings in the near future.

The nice thing about having the application is that we can now find out information like this, which will at least take some of the mystery out of having a story buried on Digg.

While there's currently no way to accurately determine the weight of a bury using this recorder, we may be able to gain future insight into the patterns of buried stories (number of buries required to fully bury a story, bury frequency, bury reasons, etc.). This could possibly help combat the rumored Bury Brigade.

You can go directly to the Bury Recorder application by clicking here.

Get Insight into Digg's Bury System with Ajaxonomy's Bury Recorder


If you have been using the popular service Digg, you know that it is very easy to submit a story, see it start to gain traction, and then watch it be buried into the dark abyss. What I find particularly frustrating is that you don't know how many people buried the story or their reasons for burying it. If you have seen Digg Spy, you have noticed that it does show buries, but you can't track data for just a particular story.

After much frustration, Ajaxonomy is now releasing a Bury Recorder application. To use it, you take the story's URL (the URL that the "more" link on the Digg upcoming/popular pages takes you to, or the page that clicking on the story title takes you to from your profile, i.e. http://digg.com/[story]), paste it into the application, and click "Watch for Buries"; the application will then start recording any buries the story receives. This will let you see whether your story had 100 diggs and 5 buries before it was permanently buried, or whether it was more like 100 diggs and 300 buries. The idea is that you submit a story and then have the recorder capture any buries from the moment you start watching. Note that in this Beta 1.0 release you have to leave your machine on and the application open in order to make sure it continues to capture buries.

Before I go a bit into the design and give more information on using the application, I wanted to say that the application is open source and can be changed and put on your own server. If you do change it and/or host it on a different server, we just ask for a link back to us and credit for the initial creation of the application. Also, if you do decide to put it on a server, let us know and we might link to your server as another option to alleviate traffic concerns on our server.

So, now that you are excited, you will want the link to the application. Click here to be taken to the Bury Recorder application.

The following is a quick overview of how to use the application, to make it a bit less confusing (more than likely most people could figure it out, but this way, if it looks like it isn't working, you have somewhere to look).

Using the application is as easy as one, two, three. There are, however, two ways to use it; the first way is below.

  1. Open the application (once again the link to the application is here)
  2. Copy and paste the URL of the story into the text box (i.e. http://digg.com/[story])
  3. Click the "Watch for Buries" button and then let the application start recording buries (make sure not to close the application or turn off/hibernate your computer)

The other way to use the application is as easy as one, two (yep, there is no three with this method). Before using the steps below you will need to create a bookmarklet, which can be done by following the directions at the bottom of the application; a sketch of what such a bookmarklet might look like follows these steps.

  1. Click on the bookmarklet from the story page on Digg (this has to be the page you get when you click the "more" link in Digg [or, from your profile page, the page that clicking on the story title takes you to], which is the same page you would use to get the URL for the first method)
  2. Click the "Watch for Buries" button and then let the application start recording buries (make sure not to close the application or turn off/hibernate your computer)
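As promised, here is a sketch of what such a bookmarklet might look like. The recorder path below is made up; the real directions (and the real URL) are at the bottom of the application itself.

// Hypothetical bookmarklet (the actual one is generated by the application):
// it simply forwards the current Digg story's URL to the recorder.
javascript:window.location='http://ajaxonomy.com/buryrecorder/?story='+encodeURIComponent(window.location.href);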

Now that you know how to use the application, I will go a bit into how it was created. The application reads the JSON feed used by Digg Spy. It does this using Ajax (i.e. the XMLHttpRequest object), which requires a server side proxy due to cross-domain security restrictions. Because of the way the JSON is returned from Digg Spy (it doesn't set a variable equal to the returned object), we were forced to use the aforementioned server side proxy and an eval statement instead of DOM manipulation. The application simply polls for updated data every 20 seconds, which makes sure we don't miss any data without putting too much strain on the server.
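As an illustration of that flow, here is a minimal sketch of the polling approach. The feed URL and the recordBuries() handler are placeholders, not the application's actual source (which you can download below).

// Minimal sketch of the poll-proxy-eval loop described above. The feed URL and
// recordBuries() are placeholders; see the downloadable source for the real code.
var spyFeedUrl = "http://digg.com/spy-feed"; // placeholder for the Digg Spy JSON feed

function recordBuries(data) {
    // Placeholder: store any new buries seen for the watched story.
}

function pollDiggSpy() {
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject("Microsoft.XMLHTTP");
    // The server side proxy fetches the feed for us, so the browser treats the
    // response as same-domain content.
    xhr.open("GET", "proxy.php?proxy_url=" + encodeURIComponent(spyFeedUrl), true);
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4 && xhr.status == 200) {
            // The feed is bare JSON not assigned to any variable, so wrap it in
            // parentheses and eval it to get a usable JavaScript object.
            var data = eval("(" + xhr.responseText + ")");
            recordBuries(data);
        }
    };
    xhr.send(null);
}

setInterval(pollDiggSpy, 20000); // poll every 20 seconds, as described above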

You can download the full source code for this Beta 1.0 release here.

This release has been tested in Firefox 2.0.0.11 and Internet Explorer 7. It should work in many more browsers, but they have not yet been tested. If you try it in a different browser and either find bugs or the application works perfectly, we would appreciate it if you contacted us with your testing results.

Also, if you do any cool things with the application, or if you have any cool ideas, feel free to blog about it on this blog. Just sign up for a free account, and once you log in, click "Create Content" => "Blog Entry" and write your post. If the admins of this site feel that your post is an interesting one, they will move it to the home page.

I hope that you find this application useful and that you keep checking back for new versions and improvements.

If You Bury a Digg Story at Least Comment


I have been posting to Digg for a while now; however, recently my submissions seem to continually be buried. Is this the rumored Bury Brigade? I have no way of knowing why these stories have been buried; they are not spam posts and are interesting original content. It seems like whenever I post a story and it gets into the "Hot" section, it gets buried soon after it starts to rise.

Since Digg gives me no way of knowing who buried my stories or why, it gives me no chance to figure out what I should do differently, and no way to appeal the bury. All I want from people who bury my stories is the courtesy of leaving a comment that lets me know why they are burying the story. Kevin Rose, some visibility here would be very helpful.

So please, if you bury my stories, just leave a comment so I know what you think I did wrong.

Videobox 1.1


If you are familiar with Lightbox, you know how useful a tool it is. While Lightbox lets you easily display images, Videobox does the same thing, only with videos.

Videobox was written using MooTools and uses SWFObject to embed Flash videos. Videobox works great for videos from YouTube, but also works with other services.

Click here to go to the official page on SourceForge. The page includes the code, directions, and demos of the library.

Check out Videobox and let us know about any cool sites or applications that you build with Videobox.

Cross-Domain XML Access - a.k.a. Server Side Proxy


In developing Ajax applications, there are quite a few times when you may need to consume some type of XML from a different domain (especially in the case of mashups). Since retrieving XML from a different domain is blocked by browser security restrictions, you have to find another way of getting to the XML. One way around this is to use JSON data; however, many APIs are only available in XML, so it is very possible that XML will be your only option.

In order to get around these security restrictions, we need to use a server side proxy. How this works is that the server retrieves the XML from the other domain (server-to-server requests are not subject to the browser's restrictions) and then returns it to the Ajax application through the XMLHttpRequest object. As far as the browser can tell, the XML comes from the same domain.

So, without further delay, below is some PHP code that I have used to create a server side proxy. You just need to pass in the target URL through the proxy_url parameter in the query string.

<?php
//          FILE: proxy.php
//
// LAST MODIFIED: 2006-03-23
//
//        AUTHOR: Troy Wolf 
//
//   DESCRIPTION: Allow scripts to request content they otherwise may not be
//                able to. For example, AJAX (XmlHttpRequest) requests from a
//                client script are only allowed to make requests to the same
//                host that the script is served from. This is to prevent
//                "cross-domain" scripting. With proxy.php, the javascript
//                client can pass the requested URL in and get back the
//                response from the external server.
//
//         USAGE: "proxy_url" required parameter. For example:
//                http://www.mydomain.com/proxy.php?proxy_url=http://www.yahoo.com
//

// proxy.php requires Troy's class_http. http://www.troywolf.com/articles
// Alter the path according to your environment.
require_once("class_http.php");

$proxy_url = isset($_GET['proxy_url'])?$_GET['proxy_url']:false;
if (!$proxy_url) {
    header("HTTP/1.0 400 Bad Request");
    echo "proxy.php failed because proxy_url parameter is missing";
    exit();
}

// Instantiate the http object used to make the web requests.
// More info about this object at www.troywolf.com/articles
if (!$h = new http()) {
    header("HTTP/1.0 501 Script Error");
    echo "proxy.php failed trying to initialize the http object";
    exit();
}

$h->url = $proxy_url;
$h->postvars = $_POST;
if (!$h->fetch($h->url)) {
    header("HTTP/1.0 501 Script Error");
    echo "proxy.php had an error attempting to query the url";
    exit();
}

// Forward the headers to the client.
$ary_headers = split("\n", $h->header);
foreach($ary_headers as $hdr) { header($hdr); }

// Send the response body to the client.
echo $h->body;
?>
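To give an idea of how a page would use this, here is a small hypothetical example of requesting a cross-domain XML feed through proxy.php (the feed URL is just an example):

// Hypothetical client side usage of proxy.php; the target feed URL is an example.
var url = "proxy.php?proxy_url=" + encodeURIComponent("http://example.com/feed.xml");
var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                : new ActiveXObject("Microsoft.XMLHTTP");
xhr.open("GET", url, true);
xhr.onreadystatechange = function() {
    if (xhr.readyState == 4 && xhr.status == 200) {
        // Because the proxy forwards the response headers, the browser parses
        // the body as XML and populates responseXML like a same-domain request.
        var items = xhr.responseXML.getElementsByTagName("item");
        alert("Fetched " + items.length + " items through the proxy");
    }
};
xhr.send(null);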

Below is the class_http file referenced above.

<?php
/*
* Filename.......: class_http.php
* Author.........: Troy Wolf [troy@troywolf.com]
* Last Modified..: Date: 2006/03/06 10:15:00
* Description....: Screen-scraping class with caching. Includes image_cache.php
                   companion script. Includes static methods to extract data
                   out of HTML tables into arrays or XML. Now supports sending
                   XML requests and custom verbs with support for making
                   WebDAV requests to Microsoft Exchange Server.
*/

class http {
    var $log;
    var $dir;
    var $name;
    var $filename;
    var $url;
    var $port;
    var $verb;
    var $status;
    var $header;
    var $body;
    var $ttl;
    var $headers;
    var $postvars;
    var $xmlrequest;
    var $connect_timeout;
    var $data_ts;
    
    /*
    The class constructor. Configure defaults.
    */
    function http() {
        $this->log = "New http() object instantiated.<br />\n";
        
        /*
        Seconds to attempt socket connection before giving up.
        */
        $this->connect_timeout = 30;
        
        /*
        Set the 'dir' property to the directory where you want to store the cached
        content. I suggest a folder that is not web-accessible.
        End this value with a "/".
        */
        $this->dir = realpath("./")."/"; //Default to current dir.

        $this->clean();               

        return true;
    }
    
    /*
    fetch() method to get the content. fetch() will use 'ttl' property to
    determine whether to get the content from the url or the cache.
    */
    function fetch($url="", $ttl=0, $name="", $user="", $pwd="", $verb="GET") {
        $this->log .= "--------------------------------<br />fetch() called
\n"; $this->log .= "url: ".$url."<br />\n"; $this->status = ""; $this->header = ""; $this->body = ""; if (!$url) { $this->log .= "OOPS: You need to pass a URL!<br />"; return false; } $this->url = $url; $this->ttl = $ttl; $this->name = $name; $need_to_save = false; if ($this->ttl == "0") { if (!$fh = $this->getFromUrl($url, $user, $pwd, $verb)) { return false; } } else { if (strlen(trim($this->name)) == 0) { $this->name = MD5($url); } $this->filename = $this->dir."http_".$this->name; $this->log .= "Filename: ".$this->filename."<br />"; $this->getFile_ts(); if ($this->ttl == "daily") { if (date('Y-m-d',$this->data_ts) != date('Y-m-d',time())) { $this->log .= "cache has expired<br />"; if (!$fh = $this->getFromUrl($url, $user, $pwd, $verb)) { return false; } $need_to_save = true; if ($this->getFromUrl()) { return $this->saveToCache(); } } else { if (!$fh = $this->getFromCache()) { return false; } } } else { if ((time() - $this->data_ts) >= $this->ttl) { $this->log .= "cache has expired<br />"; if (!$fh = $this->getFromUrl($url, $user, $pwd, $verb)) { return false; } $need_to_save = true; } else { if (!$fh = $this->getFromCache()) { return false; } } } } /* Get response header. */ $this->header = fgets($fh, 1024); $this->status = substr($this->header,9,3); while ((trim($line = fgets($fh, 1024)) != "") && (!feof($fh))) { $this->header .= $line; if ($this->status=="401" and strpos($line,"WWW-Authenticate: Basic realm=\"")===0) { fclose($fh); $this->log .= "Could not authenticate<br />\n"; return FALSE; } } /* Get response body. */ while (!feof($fh)) { $this->body .= fgets($fh, 1024); } fclose($fh); if ($need_to_save) { $this->saveToCache(); } return $this->status; } /* PRIVATE getFromUrl() method to scrape content from url. */ function getFromUrl($url, $user="", $pwd="", $verb="GET") { $this->log .= "getFromUrl() called<br />"; preg_match("~([a-z]*://)?([^:^/]*)(:([0-9]{1,5}))?(/.*)?~i", $url, $parts); $protocol = $parts[1]; $server = $parts[2]; $port = $parts[4]; $path = $parts[5]; if ($port == "") { if (strtolower($protocol) == "https://") { $port = "443"; } else { $port = "80"; } } if ($path == "") { $path = "/"; } if (!$sock = @fsockopen(((strtolower($protocol) == "https://")?"ssl://":"").$server, $port, $errno, $errstr, $this->connect_timeout)) { $this->log .= "Could not open connection. Error " .$errno.": ".$errstr."<br />\n"; return false; } $this->headers["Host"] = $server.":".$port; if ($user != "" && $pwd != "") { $this->log .= "Authentication will be attempted<br />\n"; $this->headers["Authorization"] = "Basic ".base64_encode($user.":".$pwd); } if (count($this->postvars) > 0) { $this->log .= "Variables will be POSTed<br />\n"; $request = "POST ".$path." HTTP/1.0\r\n"; $post_string = ""; foreach ($this->postvars as $key=>$value) { $post_string .= "&".urlencode($key)."=".urlencode($value); } $post_string = substr($post_string,1); $this->headers["Content-Type"] = "application/x-www-form-urlencoded"; $this->headers["Content-Length"] = strlen($post_string); } elseif (strlen($this->xmlrequest) > 0) { $this->log .= "XML request will be sent<br />\n"; $request = $verb." ".$path." HTTP/1.0\r\n"; $this->headers["Content-Length"] = strlen($this->xmlrequest); } else { $request = $verb." ".$path." 
HTTP/1.0\r\n"; } #echo "<br />request: ".$request; if (fwrite($sock, $request) === FALSE) { fclose($sock); $this->log .= "Error writing request type to socket<br />\n"; return false; } foreach ($this->headers as $key=>$value) { if (fwrite($sock, $key.": ".$value."\r\n") === FALSE) { fclose($sock); $this->log .= "Error writing headers to socket<br />\n"; return false; } } if (fwrite($sock, "\r\n") === FALSE) { fclose($sock); $this->log .= "Error writing end-of-line to socket<br />\n"; return false; } #echo "
post_string: ".$post_string; if (count($this->postvars) > 0) { if (fwrite($sock, $post_string."\r\n") === FALSE) { fclose($sock); $this->log .= "Error writing POST string to socket<br />\n"; return false; } } elseif (strlen($this->xmlrequest) > 0) { if (fwrite($sock, $this->xmlrequest."\r\n") === FALSE) { fclose($sock); $this->log .= "Error writing xml request string to socket<br />\n"; return false; } } return $sock; } /* PRIVATE clean() method to reset the instance back to mostly new state. */ function clean() { $this->status = ""; $this->header = ""; $this->body = ""; $this->headers = array(); $this->postvars = array(); /* Try to use user agent of the user making this request. If not available, default to IE6.0 on WinXP, SP1. */ if (isset($_SERVER['HTTP_USER_AGENT'])) { $this->headers["User-Agent"] = $_SERVER['HTTP_USER_AGENT']; } else { $this->headers["User-Agent"] = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"; } /* Set referrer to the current script since in essence, it is the referring page. */ if (substr($_SERVER['SERVER_PROTOCOL'],0,5) == "HTTPS") { $this->headers["Referer"] = "https://".$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI']; } else { $this->headers["Referer"] = "http://".$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI']; } } /* PRIVATE getFromCache() method to retrieve content from cache file. */ function getFromCache() { $this->log .= "getFromCache() called<br />"; //create file pointer if (!$fp=@fopen($this->filename,"r")) { $this->log .= "Could not open ".$this->filename."<br />"; return false; } return $fp; } /* PRIVATE saveToCache() method to save content to cache file. */ function saveToCache() { $this->log .= "saveToCache() called<br />"; //create file pointer if (!$fp=@fopen($this->filename,"w")) { $this->log .= "Could not open ".$this->filename."<br />"; return false; } //write to file if (!@fwrite($fp,$this->header."\r\n".$this->body)) { $this->log .= "Could not write to ".$this->filename."<br />"; fclose($fp); return false; } //close file pointer fclose($fp); return true; } /* PRIVATE getFile_ts() method to get cache file modified date. */ function getFile_ts() { $this->log .= "getFile_ts() called<br />"; if (!file_exists($this->filename)) { $this->data_ts = 0; $this->log .= $this->filename." does not exist<br />"; return false; } $this->data_ts = filemtime($this->filename); return true; } /* Static method table_into_array() Generic function to return data array from HTML table data rawHTML: the page source needle: optional string to start parsing source from needle_within: 0 = needle is BEFORE table, 1 = needle is within table allowed_tags: list of tags to NOT strip from data, e.g. "<a><b>" */ function table_into_array($rawHTML,$needle="",$needle_within=0,$allowed_tags="") { $upperHTML = strtoupper($rawHTML); $idx = 0; if (strlen($needle) > 0) { $needle = strtoupper($needle); $idx = strpos($upperHTML,$needle); if ($idx === false) { return false; } if ($needle_within == 1) { $cnt = 0; while(($cnt < 100) && (substr($upperHTML,$idx,6) != "<TABLE")) { $idx = strrpos(substr($upperHTML,0,$idx-1),"<"); $cnt++; } } } $aryData = array(); $rowIdx = 0; /* If this table has a header row, it may use TD or TH, so check special for this first row. 
*/ $tmp = strpos($upperHTML,"<TR",$idx); if ($tmp === false) { return false; } $tmp2 = strpos($upperHTML,"</TR>",$tmp); if ($tmp2 === false) { return false; } $row = substr($rawHTML,$tmp,$tmp2-$tmp); $pattern = "/<TH>|<TH\ |<TD>|<TD\ /"; preg_match($pattern,strtoupper($row),$matches); $hdrTag = $matches[0]; while ($tmp = strpos(strtoupper($row),$hdrTag) !== false) { $tmp = strpos(strtoupper($row),">",$tmp); if ($tmp === false) { return false; } $tmp++; $tmp2 = strpos(strtoupper($row),"</T"); $aryData[$rowIdx][] = trim(strip_tags(substr($row,$tmp,$tmp2-$tmp),$allowed_tags)); $row = substr($row,$tmp2+5); preg_match($pattern,strtoupper($row),$matches); $hdrTag = $matches[0]; } $idx = strpos($upperHTML,"</TR>",$idx)+5; $rowIdx++; /* Now parse the rest of the rows. */ $tmp = strpos($upperHTML,"<<R",$idx); if ($tmp === false) { return false; } $tmp2 = strpos($upperHTML,"</TABLE>",$idx); if ($tmp2 === false) { return false; } $table = substr($rawHTML,$tmp,$tmp2-$tmp); while ($tmp = strpos(strtoupper($table),"<TR") !== false) { $tmp2 = strpos(strtoupper($table),"</TR"); if ($tmp2 === false) { return false; } $row = substr($table,$tmp,$tmp2-$tmp); while ($tmp = strpos(strtoupper($row),"<TD") !== false) { $tmp = strpos(strtoupper($row),">",$tmp); if ($tmp === false) { return false; } $tmp++; $tmp2 = strpos(strtoupper($row),"</TD"); $aryData[$rowIdx][] = trim(strip_tags(substr($row,$tmp,$tmp2-$tmp),$allowed_tags)); $row = substr($row,$tmp2+5); } $table = substr($table,strpos(strtoupper($table),"</TR>")+5); $rowIdx++; } return $aryData; } /* Static method table_into_xml() Generic function to return xml dataset from HTML table data rawHTML: the page source needle: optional string to start parsing source from allowedTags: list of tags to NOT strip from data, e.g. "<a><b>" */ function table_into_xml($rawHTML,$needle="",$needle_within=0,$allowedTags="") { if (!$aryTable = http::table_into_array($rawHTML,$needle,$needle_within,$allowedTags)) { return false; } $xml = "<?xml version=\"1.0\" standalone=\"yes\" \?\>\n"; $xml .= "<TABLE>\n"; $rowIdx = 0; foreach ($aryTable as $row) { $xml .= "\t<ROW id=\"".$rowIdx."\">\n"; $colIdx = 0; foreach ($row as $col) { $xml .= "\t\t<COL id=\"".$colIdx."\">".trim(utf8_encode(htmlspecialchars($col)))."</COL>\n"; $colIdx++; } $xml .= "\t</ROW>\n"; $rowIdx++; } $xml .= "</TABLE>"; return $xml; } } ?>

Click here to be taken to the page where you can download all of the code above.

So now you have some code to create a server side proxy and can use it to create a great new mashup! If you know of any other good server side proxy code in PHP, Ruby, Java, .NET, or any other web language, please share it in the comments.

DWR 3.0 vision

DWR 3.0 is going to be released soon. The following is the vision from Joe Walker, the founder of DWR.

DWR 2.0 has been out for 6 months or so. At the time, I swore that the next release would be a small one, called 2.1. However, it appears that I'm not good at swearing, because there is a lot in the next release - I think we're going to have to call it 3.0.

Since 2.0, we've been working on the following: support for JSON, Bayeux, image/binary file upload and download, a Hub with JMS/OpenAjax support, and more reverse Ajax APIs. I also want to get some Gears integration going.

There are also a whole set of non-functional things to consider:
* Moving the website to directwebremoting.org
* Restart chasing CLAs, using a foundation CLA rather than a Getahead CLA
* Get some lawyer to create a CLA so Getahead can grant rights to the Foundation (or something similar)
* Get someone to pony up and let us move to SVN
* Unit tests

JSON support: One goal is a RESTian API so you can do something like this: http://example.com/dwr/json/ClassName/methodName?param1=fred;param2=jim and DWR will reply with a JSON structure containing the result of calling className.methodName("fred", "jim"); It would be good to support JSONP along with this. We might also allow POSTing of JSON structures, although I’m less convinced about this because it quickly gets DWR specific, and then what’s the point of a standard. Status - DWR has always used a superset of JSON that I like to call JavaScript. We do this to cope with recursive data, XML objects, and such like. I’ve done most of the work so that DWR can use the JSON subset, but not created the ‘handler’ to interface between the web and a JSON data structure.
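To make the JSONP idea concrete, here is a hypothetical sketch. The "callback" parameter name and the response shape are assumptions, since (as noted above) this part of the design isn't settled:

// Hypothetical JSONP call against the RESTian endpoint described above. The
// "callback" parameter and the response shape are assumptions, not DWR's API.
function handleResult(result) {
    alert("methodName returned: " + result);
}

var script = document.createElement("script");
script.src = "http://example.com/dwr/json/ClassName/methodName" +
             "?param1=fred;param2=jim&callback=handleResult";
document.getElementsByTagName("head")[0].appendChild(script);
// The server would reply with a script body like: handleResult("some value");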

Bayeux Support: Greg Wilkins (Jetty) committed some changes to DWR, which need some tweaks to get working properly. Greg still intends to complete this.

File/Image Upload and Download: This allows a Java method to return an AWT BufferedImage and have that image turn up in the page, or to take or return an InputStream and have that populated from a file upload or offered as a file download. I've had some bug reports that it doesn't work with some browsers; we also need to find a simple way to report progress to a web page.

DWR Hub and integration with JMS and OpenAjax Hub: We have a hub, along with one-way integration with JMS. The OpenAjax portion will be simple, except for getting the OpenAjax Hub to work smoothly with JMS. Much of this work has not hit CVS yet, but it will soon.

Reverse Ajax Proxy API Generator: The goal with this is a program that will take JavaScript as input, and output a Java API which, when called, generates JavaScript to send to a browser. Some of this work has been tricky, but then meta-meta-programming was always bound to be hard. This currently mostly works with TIBCO GI, but more work will be needed to allow it to extract type information from other APIs.

DOM Manipulation Library: Currently this is limited to window.alert, mostly because I’m not sure how far to take it. There are a set of things like history, location, close, confirm that could be useful from a server, and that are not typically abstracted by libraries.

Gears Integration: I’ve not started this, but it needs to take higher priority than it currently does. It would be very cool if DWR would transparently detect Gears, and then allow some form of guaranteed delivery including resending of messages if the network disappears for a while.

Website: We need to get the DWR website moved away from the Getahead server, and onto Foundation servers. There will be some URLs to alter as part of this, and I don’t want to lose Google juice by doing it badly.
The documentation for DWR 2 was not up to the standard of 1.x, and while it has been getting better, we could still do more. One thing that has held this back is the lack of a DWR wiki. I hope we can fix this with the server move.

Source Repo: We are currently using CVS hosted by java.net (which is a collab.net instance - yuck). They support SVN, but want to charge me a few hundred dollars to upgrade. Maybe the Foundation can either ridicule them into submission or pay the few hundred dollars for the metadata so we can host the repo ourselves. The latter option is probably better.

Unit Tests: I've been trying for ages to find a way to automatically test with multiple browsers and servers. WebDriver looked good for a while, but it doesn't look like the project is going anywhere particularly quickly, so I'm back trying to get Selenium to act in a sane way.

XML versus JSON - What is Best for Your App?


One of the biggest debates in Ajax development today is JSON versus XML. This is at the heart of the data end of Ajax since you usually receive JSON or XML from the server side (although these are not the only methods of receiving data). Below I will be listing pros and cons of both methods.

If you have been developing Ajax applications for any length of time, you will more than likely be familiar with XML data. You also know that XML is very powerful and that there are quite a few ways to deal with it. One way is to simply apply an XSLT style sheet to the data (I won't have time in this post to go over the inconsistent browser support for XSLT, but it is something to look into if you want to go this route). This is useful if you just want to display the data. However, if you want to do something programmatic with the data (as with a web service), you will need to parse the data nodes returned to the XMLHttpRequest object, going through the document tag by tag and pulling out the needed data. Of course, there are quite a few good pre-written libraries that make walking XML data easier, and I recommend using a good one (I won't go into which libraries I prefer here, but perhaps in a future post). One thing to note is that if you want to get XML data from another domain, you will have to use a server side proxy, as the browser will not allow retrieving data across domains this way.
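For example, walking the returned nodes by hand might look something like this, assuming a made-up feed of <user> elements:

// Minimal sketch of tag-by-tag XML parsing. The <users><user><name>... feed
// structure here is made up for illustration.
function handleXmlResponse(xhr) {
    var names = [];
    var users = xhr.responseXML.getElementsByTagName("user");
    for (var i = 0; i < users.length; i++) {
        var nameNode = users[i].getElementsByTagName("name")[0];
        names.push(nameNode.firstChild.nodeValue); // the text inside <name>
    }
    return names;
}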

JSON is designed to be a more programmatic way of dealing with data. JSON (JavaScript Object Notation) returns data as JavaScript objects. In an Ajax application using JSON, you receive text through the XMLHttpRequest object (or load the data directly through a script tag, which I will touch on later) and then pass that text through an eval statement, or use DOM manipulation to pass it into a script tag (if you haven't already read my post on using JSON without eval, click here to read it). The power of this is that you can use the data in JavaScript without any parsing of the text. The downside is that if you just want to display the data, there is no easy way to do so with JSON. JSON is great for web services coming from different domains, since if you load the data through a script tag you can get it without the cross-domain constraint.
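Here is the same made-up user list as JSON; one eval and you have live JavaScript objects, with no parsing code of your own:

// The same made-up user list as JSON. Wrapping the text in parentheses makes
// the outer braces parse as an object literal rather than a code block.
var responseText = '{"users": [{"name": "Alice"}, {"name": "Bob"}]}';
var data = eval("(" + responseText + ")");
alert(data.users[1].name); // "Bob"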

The type of data that you use for your application will depend on quite a few factors. If you are going to use the data programmatically, then in most cases JSON is the better method. On the other hand, if you just want to display the data returned, I would recommend XML. Of course, there may be other factors, such as a particular web service dictating the data format. If you are getting data from a different domain and JSON is available, it may be the better choice. For Ruby on Rails developers who would prefer JSON when only XML is available, the 2.0 release lets you convert XML into JSON. One of the biggest reasons people use JSON is the size of the data: in most cases JSON needs a lot less data to send to your application (although this may vary depending on the data and how the XML is formed).

I recommend that you take a good look at the application you are building and decide, based on the above, which type of data to use. There may be more factors than those above, including corporate rules and developer experience, but this should give you a good idea of when to use each data method.

If you would like to contact me regarding any of the above you can make me your friend on Social Ajaxonomy and send a message to me through the service (Click here to go to my profile on Social Ajaxonomy).

Ajaxonomy.com Launches Social Network


Ajaxonomy.com is proud to present its latest addition, Social Ajaxonomy, a social network for developers to share links in a Digg-style environment.

This is where the community can submit links to articles, sites, posts, and whatever else it finds pertaining to Ajax and other interesting web technologies, and where users have the opportunity to vote on which links they find useful and valuable. Top articles have a chance to appear on the main blog. So, we hope that you will register an account, build a profile, and start submitting and voting for links! We look forward to growing this community with you.

Please post your comments!

OpenID 2.0 Finally Released!


It is Christmas time, and we web developers have just received a great gift: OpenID 2.0 has finally been released. This will hopefully allow for greater adoption of the standard (especially now that both Microsoft and Google have announced their support for it).

Below is an excerpt from the OpenID site.

While it's certainly been a long process in the making, we're now quite excited to announce OpenID Authentication 2.0 and OpenID Attribute Exchange 1.0 as final specifications ("OpenID 2.0"). This morning was the closing day of the Internet Identity Workshop and David Recordon, Dick Hardt, and Josh Hoyt (three of the authors and editors) made the announcement during the first session. Both specifications have evolved through extensive community participation and feedback, and each has been stable for a number of months. There are already a variety of open source libraries shipping these specifications, with product support including Google's Blogger (via Sxip's library) and Drupal, who did their own implementation of the specifications. Multiple OpenID Providers including MyOpenID, Sxipper, and VeriSign's PIP already have support for both of these specifications. Given past trends, growing support of OpenID 2.0 should be no different.

Click here to read the full post.

I can't wait to see what new applications are made using OpenID 2.0.

Thanks to Macskeeball for letting us know about this news. If you would like to tip us off to upcoming news, you can now use our new social feature (you'll be hearing more about this soon) to let us know what you would like to see on the blog.
