David Hurth's blog

SapphireSteel releases new Ruby on Rails IDE based on VS 2008 Shell

Ruby In Steel, an IDE for Ruby on Rails, has been released by SapphireSteel. The IDE is based on the Visual Studio 2008 Shell and sells at a starting price of $49, or $199 for the Developer edition (you can get more information here).

Below are the minimum system requirements to run the IDE.

Minimum Requirements: Windows XP (service pack 2) or Vista. Visual Studio 2008 Standard Edition or above is optional - if you don’t own Visual Studio 2008, Ruby In Steel will install a standalone Ruby-language edition of Visual Studio 2008.

You can get a list of all the application features here.

Below are a few excerpts from the InfoWorld article: Ruby on Rails IDE geared to Visual Studio 2008 users.

"We put all our support into Visual Studio so the end-user gets a Ruby-flavored edition of Visual Studio," with its attendant capabilities, Collingbourne said.

Use of the Visual Studio Shell gives SapphireSteel a chance to compete with Eclipse-based IDEs, such as CodeGear's 3rdRail, which also is billed as a Rails IDE, SapphireSteel said.

The IDE is a great tool for developers who like Visual Studio and want to build Ruby on Rails applications.

Google Chart API Tools

You have probably heard a lot about the Google Chart API. A few tools and scripts are available that make it easier to create charts with the API.

The first tool is a chart creator made by Dion Almaer (you may know Dion from Ajaxian). It is a nice little chart-building application, aptly named ChartMaker, built with Ext 2.0 and the Google Chart API.

Below is a demo of the application:


You can read all about the application at Dion's personal blog. Also, you can go to the application by clicking here or click here to get the code.

Nice job, Dion; this application is a great use of Ext 2.0 and the Google Chart API.

The second tool that I found was posted over at Wait till I come! and is a script that takes data from an HTML table and converts it into a chart.

Below is an excerpt from the post.

Generating charts from accessible data tables and vice versa using the Google Charts API

Google have lately been praised for their chart API and my esteemed colleague Ed Eliot has a workaround for its restrictions in terms of caching and server hits.

I played around a bit with it and thought it very cool but it felt a bit clunky to add all these values to a URL when they could be in the document for those who cannot see pie charts. This is why I wrote a small script that converts data tables to charts using the API and a wee bit of JavaScript.

Using this script you can take a simple, valid and accessible data table like the following and it gets automatically converted to a pie chart.


<table class="tochart size300x100 color990000" summary="Browsers for this site, March 2007">
  <caption>Browsers</caption>
  <thead>
    <tr><th scope="col">Browser</th><th scope="col">Percent</th></tr>
  </thead>
  <tbody>
    <tr><td>Firefox</td><td>60</td></tr>
    <tr><td>MSIE</td><td>25</td></tr>
    <tr><td>Opera</td><td>10</td></tr>
    <tr><td>Safari</td><td>5</td></tr>
  </tbody>
</table>

Simply add the script to the end of the body and it’ll convert all tables with a class called “tochart”. You can define the size (widthxheight) and the colour as a hexadecimal triplet as shown in this example. If you leave size and colour out, the script will use presets you can alter as variables in the script itself.

You can view a demo of the above by clicking here and you can download the demo code by clicking here. You can read the full post here.
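
The original script is available at the links above; below is a rough sketch of how such a table-to-chart conversion can work (this is my own simplified illustration, not Christian's actual code, and the function name tablesToCharts is made up). It reads the body rows of any table with the "tochart" class and swaps the table for a Google Chart image (the real script takes more care to keep the table accessible).

// Hypothetical sketch: convert each <table class="tochart ..."> into a
// Google Chart pie chart image. Not the original script from the post.
function tablesToCharts() {
  var tables = document.getElementsByTagName('table');
  for (var i = 0; i < tables.length; i++) {
    var table = tables[i];
    if (!/\btochart\b/.test(table.className)) { continue; }

    // Defaults; classes such as "size300x100" or "color990000" override them.
    var size = (table.className.match(/size(\d+x\d+)/) || [])[1] || '250x100';
    var color = (table.className.match(/color([0-9a-fA-F]{6})/) || [])[1] || 'cccccc';

    // Collect labels (first cell) and values (second cell) from the body rows.
    var labels = [], values = [];
    var rows = table.getElementsByTagName('tbody')[0].getElementsByTagName('tr');
    for (var j = 0; j < rows.length; j++) {
      var cells = rows[j].getElementsByTagName('td');
      labels.push(cells[0].innerHTML);
      values.push(cells[1].innerHTML);
    }

    // Build the pie chart URL and replace the table with an image.
    var img = document.createElement('img');
    img.src = 'http://chart.apis.google.com/chart?cht=p' +
              '&chs=' + size + '&chco=' + color +
              '&chd=t:' + values.join(',') +
              '&chl=' + labels.join('|');
    img.alt = table.getElementsByTagName('caption')[0].innerHTML;
    table.parentNode.replaceChild(img, table);
  }
}
tablesToCharts();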

It is great to see some good development in the Google Chart API arena.

If you know of any other cool applications that use libraries like the Google Chart API, we would love to hear about them. You can leave them in the comments, or if you sign up for a free account on this blog, you can blog about it on Ajaxonomy.com.

Cross-Site XMLHttpRequest in Firefox 3

Over at John Resig's blog (you may know John from his work on jQuery) he has an interesting post about using the XMLHttpRequest object to get cross-domain data in Firefox 3 (currently in beta) without a cross-domain proxy. Built-in cross-site XMLHttpRequest support is new in Firefox 3.

Below is an excerpt from John's post.

In a nutshell, there are two techniques that you can use to achieve your desired cross-site-request result: Specifying a special Access-Control header for your content or including an access-control processing instruction in your XML.
More information can be found in the documentation but here's a quick peek at what your code might look like:
An HTML document (served via PHP) that specifies an Access-Control header: (Demo - FF3 Only)

<?php header('Access-Control: allow <*>'); ?>
<b>John Resig</b>

An XML document that specifies an access-control processing instruction: (Demo - FF3 Only)

<?xml version="1.0" encoding="UTF-8"?>

<?access-control allow="*"?>
<simple><name>John Resig</name></simple>

Now what's especially nice about all this is that you don't have to change a single line of your client-side code to make this work! Take, for example, this page which requests an HTML file from a remote domain - and, specifically, the JavaScript within it:

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://dev.jquery.com/~john/xdomain/test.php", true);

xhr.onreadystatechange = function(){
  if ( xhr.readyState == 4 ) {
    if ( xhr.status == 200 ) {
      document.body.innerHTML = "My Name is: " + xhr.responseText;
    } else {
      document.body.innerHTML = "ERROR";
    }
  }
};
xhr.send(null);

You can read John's full post here.

As someone who loves the power of web services from different domains (one thing I love about JSON is that by using DOM manipulation to load the data through a script tag you can get around cross-domain issues without the overhead of a server-side proxy), I hope this feature catches on with more browsers as new versions are released, so that it gains cross-browser support.
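
For comparison, the script-tag approach mentioned above works roughly like this (a minimal sketch of my own; the service URL and the name of the callback query parameter are made up for illustration, and real JSON APIs document their own conventions):

// Hypothetical JSONP-style sketch: load cross-domain JSON by injecting a
// <script> tag instead of using XMLHttpRequest.
function loadJson(url, callbackName, callback) {
  // Expose the callback globally so the injected script can call it.
  window[callbackName] = function (data) {
    callback(data);
    window[callbackName] = undefined;
  };
  var script = document.createElement('script');
  script.src = url + (url.indexOf('?') == -1 ? '?' : '&') +
               'callback=' + callbackName;
  document.getElementsByTagName('head')[0].appendChild(script);
}

// The remote service must wrap its JSON in a call to the named function,
// e.g. handleData({"name": "John Resig"});
loadJson('http://example.com/api/data', 'handleData', function (data) {
  document.body.innerHTML = 'My Name is: ' + data.name;
});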

Choosing a Development Framework

Over at Smashing Magazine they have put together a good roundup of development frameworks. The post covers the most popular frameworks for each language, gives tips on which framework is best for you, and includes a brief overview of the Model View Controller (MVC) design pattern.

Below is an excerpt covering the JavaScript frameworks.

JavaScript

  • Prototype is a JavaScript framework that serves as a foundation for other JavaScript frameworks. Don’t be fooled however, as Prototype can stand on its own.

  • script.aculo.us is built on the Prototype framework and has found its way into many high-profile sites including Digg, Apple, and Basecamp. The documentation is clear, and has an easy learning curve. However, compared to other JavaScript frameworks it is larger in size.

  • Mootools is a compact, modular, object-oriented JavaScript framework with impressive effects and Ajax handling. The framework is for advanced users as the learning curve is rather steep.

  • jQuery continues to rise in popularity due to its extensive API and active development. jQuery is a great balance of complexity and functionality.

  • For ASP.NET developers you can’t beat the ASP.NET AJAX framework, which is built into the .NET Framework as of 3.5 (you can also download it for previous versions). The amount of documentation, examples, and community support continues to increase. You can simply drag and drop an UpdatePanel control onto an ASPX page and start processing Ajax!

Further JavaScript Frameworks

  • The Yahoo! User Interface Library - Yahoo! released its impressive JavaScript library with incredible amounts of documentation.
  • Ext JS - Originally built as an add-on to YUI, it can now extend Prototype and jQuery. Includes an impressive interface.
  • Dojo is a small library focused on interpreter independence and small core size.
  • MochiKit - A framework that has focus on scripting language standards including ECMAScript and the W3C DOM.

Click here to read the full post on the frameworks.

Forcing a File Download in PHP

I found an interesting tutorial on forcing a file to be downloaded instead of opened with the associated application.

Below is the PHP code that would be called to force the download.

<?php
session_cache_limiter('none');
session_start();

function _Download($f_location, $f_name){
    // Strip path separators from the user-supplied values as a basic
    // guard against path traversal.
    $f_name = str_replace('/', '', $f_name);
    $f_location = str_replace('/', '', $f_location);
    header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
    header('Content-Description: File Transfer');
    header('Content-Type: application/octet-stream');
    header('Content-Length: ' . filesize($f_location));
    header('Content-Disposition: attachment; filename=' . basename($f_name));
    readfile($f_location);
}

$file = $_GET['file'];
$loc = $_GET['location'];

_Download($loc, $file);
?>

To use this script, create a link to the download file; let's assume the script is named download.php. Link to download.php?file=filename.txt&location=filename.txt. The file name and location are passed in as URL parameters, and the script then forces the download.
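
For example, something like the following (the file name is hypothetical, and download.php is assumed to sit in the same directory as the file) sends the browser to the script so the file is served as an attachment:

// Hypothetical usage: point the browser at download.php so the file is
// downloaded rather than opened in the page.
var fileName = 'report.txt'; // made-up file name for illustration
window.location.href = 'download.php' +
  '?file=' + encodeURIComponent(fileName) +
  '&location=' + encodeURIComponent(fileName);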

This is a great script for forcing downloads of images, web pages, or text files, as these would otherwise be opened in the browser.

To view the full tutorial, click here.

John Lilly named as Mozilla's new CEO

Mozilla Corporation, maker of the wonderful open source Firefox browser (they do make other browsers as well), has just named John Lilly as its new CEO.

Below is an excerpt from Mitchell Baker's blog announcing the change.

As a result I've asked John to take on the role of CEO of the Mozilla Corporation, and John has agreed. In reality John and I have been unconsciously moving towards this change for some time, as John has been providing more and more organizational leadership. It is very Mozilla-like to acknowledge the scope of someone's role after he or she has been doing it for a while, and this is a good part of what is happening here. I expect this transition to continue to be very smooth.

I will remain an active and integral part of MoCo. I've been involved in shipping Mozilla products since the dawn of time, and have no intention of distancing myself from our products or MoCo. I'll remain both as the Chairman of the Board and as an employee. My focus will shift towards the kinds of activities described above, but I'll remain deeply engaged in MoCo activities. I don't currently plan to create a new title. I have plenty of Mozilla titles already: Chairman of the Mozilla Foundation, Chairman of the Mozilla Corporation, Chief Lizard Wrangler of the project. More importantly, I hope to provide leadership in new initiatives because they are worthwhile, separate from any particular title. We will probably create an Office of the Chairman with a small set of people to work on these initiatives. I intend to remain deeply involved with MoCo precisely because I remain focused on our products and what we can accomplish within the industry.

There will be some differences with this change of roles. Most notably:

  • John's role in products and organization will become more visible to the world as he becomes more of a public voice for MoCo activities.
  • Today-- in theory at least-- John provides advice to me for a range of decisions for which I am responsible. In the future I'll provide input to John and he'll be responsible for making MoCo an effective organization. I expect to provide advice on a subset of topics and thus reduce the duplication of work. On the other hand, I also expect to be quite vocal on the topics I care about most. John and I agree on most things these days, but that doesn't stop me from being vocal :-)

I'm thrilled with this development, both with John's new role and with mine. If you've got thoughts on the kinds of projects I want to set in motion, I'm eager to hear them. And don't be surprised if you see the Mozilla Corporation doing more faster-- that's a part of the goal. We're all committed to doing things in a Mozilla style and you should expect to see that continue to shine through all that we do, whether it's shipping product or developing a new initiative.

You can read the full post at Mitchell's blog.

There should be few visible changes other than the obvious title changes. Lilly will become the spokesperson for Mozilla's activities, and Mitchell will be the point woman on the new focus in the world of security standards. Hopefully the Mozilla products will continue to thrive through this change.

Graceful handling of anchors with jQuery

Over at Hainhealt.com they have an interesting post on handling anchors gracefully with jQuery.

Below is an excerpt from the post:

I've come to use this quite often which eventually leads to a considerable amount of if statements.

Which is ugly. And since I don't like ugliness, I've coded myself a small anchor handler for jQuery. Looking at the code I think I could quite easily make it compatible with the Prototype framework too, but I'll keep that for another post :D

(function(){
  var url = window.location.href, handlers = [];

  jQuery.extend({
    anchorHandler: {
      // Register a single (regexp, callback) pair, or an array of such pairs.
      add: function(regexp, callback) {
        if (typeof(regexp) == 'object') {
          jQuery.map(regexp, function(arg){
            handlers.push({r: arg[0], cb: arg[1]});
          });
        } else {
          handlers.push({r: regexp, cb: callback});
        }
        return jQuery.anchorHandler;
      }
    }
  });

  // Once the DOM is ready, fire the callback of every handler whose
  // pattern matches the current URL.
  jQuery(document).ready(function(){
    jQuery.map(handlers, function(handler){
      var match = url.match(handler.r) && url.match(handler.r)[0] || false;
      if (match) {
        handler.cb.apply(this, [match, (url.match(/#.*/) || [false])[0]]);
      }
    });
  });
})();

And I can add triggers like this:

$.anchorHandler
  .add(/\/\#ch\-cheatsheet/,    h.comment.showCheatsheet)
  .add(/\/\#comment\-compose/,  h.comment.showCompose)
  .add(/\/\#comment\-\d+/,      h.comment.focus);

The first argument is a regular expression or a string that is passed to the match function; the second argument is the callback function.

The method also accepts arrays as an argument, like this:

$.anchorHandler.add([
  [/\/\#ch\-cheatsheet/,   h.comment.showCheatsheet],
  [/\/\#comment\-compose/, h.comment.showCompose],
  [/\/\#comment\-\d+/,     h.comment.focus]]);

The callback function receives two arguments: the matched bit of the anchor, along with the anchor itself.
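
For example, a callback along these lines would receive both values (focusComment and its body are made up for illustration; the original post does not show the handler implementations):

// Hypothetical callback: "match" is the part of the URL that matched the
// pattern, and "anchor" is the full anchor, e.g. "#comment-42".
function focusComment(match, anchor) {
  var id = anchor.replace('#', '');
  // Highlight the comment the visitor was linked to.
  jQuery('#' + id).addClass('highlight');
}

$.anchorHandler.add(/\/\#comment\-\d+/, focusComment);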

Read the full post here.

This technique is very useful for graceful degradation: the links point to real anchors that still work when JavaScript is disabled, while the handlers enhance the page when it is enabled.

Google Chart API - A Real World Example

You may have seen quite a bit about the new Google Chart API, but you may not have seen a real world example. In the recent release of the Digg Bury Recorder I got some first-hand practice using the Google Chart API. While developing the application I found a few interesting nuances, and knowing them could help you use the API in your own applications.

One of the most interesting things that I found is that the chart always graphs on a 100x100 basis starting at 0,0. This presented some issues in our application, as it needed to graph things like 25 buries against 1200 diggs. The way I solved this was to find the greatest number on each axis and divide 100 by that greatest value. Once you have this number, you multiply each point by the appropriate scaling factor for its axis (although it may not work correctly with points that are less than zero, and you would have to adjust the equation to work around that).
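
A rough sketch of that scaling step is below (this is my own illustration, not code from the Bury Recorder, and the data points are made up):

// Hypothetical sketch of the scaling described above: map each axis onto the
// chart's 0-100 range by dividing 100 by the largest value on that axis.
var diggs  = [120, 450, 800, 1200];   // x-axis values (made up)
var buries = [2, 9, 15, 25];          // y-axis values (made up)

function scaleTo100(points) {
  var max = Math.max.apply(Math, points);
  var scale = 100 / max;
  var scaled = [];
  for (var i = 0; i < points.length; i++) {
    // Assumes all points are >= 0, as noted above.
    scaled.push(Math.round(points[i] * scale));
  }
  return scaled;
}

// chd holds the x points, a pipe, then the y points; chxr labels each axis
// with its real range so the scaled points still read correctly.
var chartUrl = 'http://chart.apis.google.com/chart?cht=lxy&chs=300x225&chxt=x,y' +
               '&chxr=0,0,' + Math.max.apply(Math, diggs) +
               '|1,0,' + Math.max.apply(Math, buries) +
               '&chd=t:' + scaleTo100(diggs).join(',') +
               '|' + scaleTo100(buries).join(',');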

Below is an example of a URL that would be used to create a line chart (this is taken directly from the Digg Bury Recorder).

http://chart.apis.google.com/chart?chs=300x225&chd=t:9,16,51,51,51,52,58,59,61,62,63,67,69,69,69,72,74,74,89,93,96,98,100,100|4,8,12,16,20,25,29,33,37,41,45,50,54,58,62,66,70,75,79,83,87,91,95,100&cht=lxy&chxt=x,y&chxr=0,0,155|1,0,24

The parts of this URL that I would like to point out are below:

  • chxr=0,0,155|1,0,24 - This sets the axis label ranges of the graph (in our application I used the greatest value for each axis)
  • cht=lxy - This specifies an X,Y line chart that uses actual points (if you look at the Google Chart API documentation you will see that most examples use helloworld as the points, since each letter has a numeric value in the simple encoding, but I prefer to just pass in numbers)
  • chd=t:9,16,51,51,51,52,58,59,61,62,63,67,69,69,69,72,74,74,89,93,96,98,100,100|4,8,12,16,20,25,29,33,37,41,45,50,54,58,62,66,70,75,79,83,87,91,95,100 - This is the list of all the points (scaled as mentioned above). It is important to note that a | is used to separate the points for each axis.

The above Google Chart API URL will result in the below graph.

So now that you've seen my real world example you can check out the official Google Chart API documentation here.

If you make any cool applications using the Google Chart API, let us know about them in the comments, or you can blog about them on this blog when you sign up for a free account. If the administrators of this site consider your post interesting it will appear on the home page; otherwise it will be on your personal blog and under the blogs link.

Ajaxonomy's Digg Bury Recorder Version Beta 0.2 Released

We are proud to announce the release of Ajaxonomy's Digg Bury Recorder version Beta 0.2. This new release has a number of new features and fixes some issues that occurred with version Beta 0.1. Below is the list of new features and bug fixes.

New Features

  • Captures all buries for all stories
  • Graphs diggs to buries

Bug Fixes

  • Recorder not capturing all buries

The first new feature above is one of my favorites. The tool now captures buries from all stories posted to Digg. The great thing is that you no longer have to leave the application running overnight, and you won't miss a bury because the application was not started!

The second feature shows a graph of diggs and buries. So, now you can see a graph showing how many diggs you had at the time of each bury. This was created using Google's Chart API.

On the bug-fix side, this release fixes a bug that caused some buries not to be captured. This was due to server delays and browser issues. The fix has been tested and appears to be working, but if you notice an issue, contact us using the Contact button on this blog.

Well, that is a rundown of the new features and fixes in this release, so please let us know of any bugs you find. Also, if you find any interesting information using the tool, you can sign up for a free account on this blog and write a post like this one that has a chance to appear on the home page of Ajaxonomy.com (a.k.a. this blog).

In case you don't have the link to the application, click here to be taken to it.

Update:
It has been pointed out that the application does not capture 100% of Digg bury data. It does capture 100% of Digg Spy's bury data, which captures all buries for upcoming and popular stories. However, it does not capture buries from stories that have been fully buried as these can no longer make the Digg home or popular pages. So the tool will show you the buries that matter (the buries that keep you off the home page).

Thanks to everyone that pointed out this clarification.

The Weight of a Bury

You may have seen my last post about the launch of the Bury Recorder (click here to read the original post). To fully test the application, I ran the Digg story for that post through the Bury Recorder, and I got some interesting results.

The story was fully buried (meaning it doesn't show in the upcoming, popular or hot sections of Digg) with 68 diggs. What was incredible was that it was only buried 4 times! Another thing that seems incredible is that the story, as of the time of this writing, has 107 diggs after being submitted 22 hours ago (when you think about it, this is kind of amazing in itself, as it was buried way back at 68 diggs). Another interesting thing is that the first two buries were between 9:00pm and 11:00pm, while the last two were between 5:00am and 7:00am. I'm not sure if this means anything, but it seems interesting how they were spaced. This ratio of buries to diggs seems incredible to me; I wouldn't have expected a bury to be given so much weight. The image below shows the actual data from the application.

I'm not sure if this is a fluke and whether it would normally take a larger ratio (I know the algorithm changes depending on who dugg the story, the time of day, etc.) to fully bury a story, so I'll be running the recorder on some other Digg stories and will write about my findings in the near future.

The nice thing about having the application is that we can now find out information like this, which will at least take some of the mystery out of having a story buried on Digg.

While there's no current way to accurately determine the weight of a bury by using this recorder we may be able to gain future insight on the patterns of buried stories (number of buries required to fully bury the story, bury frequency, bury reasons, etc...). This could possibly help combat the rumored Bury Brigade.

You can go directly to the Bury Recorder application by clicking here.
