Revit App Store

Today I published my first app in the Revit App Store. I think this way of distributing small add-ins for Revit can be very useful. As the first apps are distributed for free, I decided to make a small app that automatically numbers parking spaces. When a user selects multiple parking spaces, the user can specify a prefix, postfix, starting value and interval for the numbering. The tool also detects whether the numbering would create duplicate mark values within the model, as duplicate marks are undesirable in Revit models.
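
Purely to illustrate the numbering scheme (the real add-in is of course written against the Revit API, and all names below are hypothetical), the logic boils down to something like this:

# Hypothetical sketch of the numbering logic, not the actual Revit add-in code.
def generate_marks(count, prefix: "", postfix: "", start: 1, interval: 1, existing_marks: [])
  marks = Array.new(count) { |i| "#{prefix}#{start + i * interval}#{postfix}" }
  duplicates = marks & existing_marks          # marks already present in the model
  raise "Duplicate mark values: #{duplicates.join(', ')}" unless duplicates.empty?
  marks
end

generate_marks(3, prefix: "P-", start: 10, interval: 2)  # => ["P-10", "P-12", "P-14"]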

I’m very curious to see how this App Store develops in the coming months, as it can be a great opportunity for both developers and users of Revit. At this point the store isn’t very mature yet; for example, the buttons of the installed applications live in a window of the app store on a secondary page, two mouse clicks away. I think those buttons should sit in the ribbon, directly accessible to the user. Anyway, I’ll keep an eye on how this app store develops.

Amazon S3 Query String Authentication and Ruby on Rails

This summer Amazon S3 added a new way to let people access your files stored on S3. The method is called Query String Authentication: by generating a URL for a private file you can offer access to that file for a limited amount of time. First upload a file to Amazon S3 with the ACL set to 'private'. To generate the URL for such a file I made a helper for Ruby on Rails:

require 'base64'
require 'openssl'
require 'kconv'   # provides String#toutf8

def generateTemporaryURL(resource)
  filename = "#{RAILS_ROOT}/config/amazon_s3.yml"
  config = YAML.load_file(filename)
  bucket = config[ENV['RAILS_ENV']]['bucket_name']
  access_key_id = config[ENV['RAILS_ENV']]['access_key_id']
  secret_access_key = config[ENV['RAILS_ENV']]['secret_access_key']
  expires = 10.days.from_now.to_i # 10 days from now in epoch time (UTC)

  # The canonical string S3 expects to be signed for a GET request on this resource
  stringtosign = "GET\n\n\n#{expires}\n/#{bucket}/#{ENV['RAILS_ENV']}/#{resource.gsub(" ", "+")}"

  signature = Base64.encode64(
                OpenSSL::HMAC.digest(
                  OpenSSL::Digest::Digest.new('sha1'),
                  secret_access_key, stringtosign.toutf8))
  # clean up the signature for use in the URL
  signature = signature.gsub("\n", "").gsub("+", "%2B")

  "http://#{bucket}/#{ENV['RAILS_ENV']}/#{resource.gsub(" ", "+")}?AWSAccessKeyId=#{access_key_id}&Expires=#{expires}&Signature=#{signature}"
end

This method generates a URL which is valid for 10 days. I’m not 100% sure the gsub call that converts spaces in a resource filename to a plus sign is enough; there may be more characters you should be aware of. Anyway, this method is based on the SWFUpload post.
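
As a hypothetical usage example (assuming the helper is available to your views; 'report.pdf' is an illustrative file name, not from the original post), a download link could look like this:

<%# 'report.pdf' must exist in the bucket under the current environment prefix %>
<%= link_to 'Download report', generateTemporaryURL('report.pdf') %>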

Update: Thanks to Simon, I’ve updated the example, which would otherwise produce an invalid signature: the plus sign in the URL needs to be URL-encoded.

Update 2: Rob provided a Ruby 1.9 only version.

Passenger and REE on Ubuntu 10.04

UPDATE: that didn’t work well; I’m now using Passenger 3 from the gem that comes with REE. No installation of Passenger from Brightbox, just REE, and then run passenger-install-apache2-module. Done ;-)

A small post on how to install Passenger and Ruby Enterprise Edition on Ubuntu 10.04.

Add the following line to /etc/apt/sources.list.d/passenger:

deb http://apt.brightbox.net lucid main

Get the public key to satisfy apt-get:

wget http://apt.brightbox.net/release.asc

Install the key:

sudo apt-key add release.asc

Get the latest version of REE from http://www.rubyenterpriseedition.com/download.html

wget http://rubyforge.org/frs/download.php/71100/ruby-enterprise_1.8.7-2010.02_i386_ubuntu10.04.deb

Install this package:

sudo dpkg -i ruby-enterprise_1.8.7-2010.02_i386_ubuntu10.04.deb

Install passenger:

sudo apt-get install libapache2-mod-passenger

Restart apache:

sudo service apache2 restart

Done!

MiniGeocode 0.0.1 Released

Today I released my first Ruby on Rails plugin. This plugin does some basic geolocation lookups. It can be installed by issuing the gem install mini_geocode command. To add it to a Rails 3 project, add gem 'mini_geocode' to the Gemfile in the root of your Rails project. To use it in a Rails 2.x installation, add config.gem 'mini_geocode' to your environment.rb. This plugin has no dependencies, so installation should be very easy.
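
For reference, the two installation options described above look like this:

# Rails 3: add to the Gemfile in the root of your project
gem 'mini_geocode'

# Rails 2.x: add inside the Rails::Initializer block in config/environment.rb
config.gem 'mini_geocode'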

Why Is My KmlLayer in Google Maps V3 Not Working?

If you use some code like this:

<script type="text/javascript">
  function initialize() {
    var options = {
      mapTypeId: google.maps.MapTypeId.SATELLITE,
      streetViewControl: true
    };
    var map = new google.maps.Map(document.getElementById('map'), options);
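    // Note: Google's servers fetch this URL themselves, so it must be publicly
    // reachable; a localhost address like the one below will therefore not work.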
    var kmlLayer = new google.maps.KmlLayer('http://127.0.0.1:3000/kmlfile.kml');
    kmlLayer.setMap(map);
  }

  function loadScript() {
    var script = document.createElement("script");
    script.type = "text/javascript";
    script.src = "http://maps.google.com/maps/api/js?sensor=false&callback=initialize";
    document.body.appendChild(script);
  }
  window.onload = loadScript;

</script>

In a typical Ruby on Rails development environment you will have your server running on localhost on port 3000. Google Maps does not fetch your KML file directly from the browser; Google’s own servers fetch it, so a URL on your local machine is unreachable. The solution is to host your KML file somewhere that is publicly accessible to Google Maps. I hope this tip saved you some valuable time! ;-)

Force File_column to Regenerate the Thumbnails

A small post on how to make file_column regenerate thumbnails that have already been generated.

def update_attributes(att)
  self.path = File.new(self.path, "r")
  self.save
  super(att)
end

When you put this piece of code in the model that uses file_column, the only thing that needs to change is 'path', which should become the name of your file_column attribute. The update_attributes method is typically called from the update action in your controllers, so every update regenerates the thumbnails.
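
As a hypothetical example (the model and attribute names are mine, not from the original post), with a file_column named image the override and a typical controller call would look like this:

class Photo < ActiveRecord::Base
  file_column :image

  def update_attributes(att)
    # Re-assigning the stored file makes file_column process it again,
    # which regenerates the thumbnail versions.
    self.image = File.new(self.image, "r")
    self.save
    super(att)
  end
end

# In the controller's update action:
@photo = Photo.find(params[:id])
@photo.update_attributes(params[:photo])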

VMware ESXi 4.0 + QNAP TS-410

UPDATE: There are problems with iSCSI on the QNAP in combination with VMware. After I wrote this blog post my complete setup crashed and I lost all my virtual machines. So this blog post shows how to set up iSCSI with VMware, but do not use it with the QNAP. Right now I’m using NFS as a datastore with 7 virtual machines without any problems.

I’ve updated my server setup at home with some new hardware. First of all a new QNAP TS-410 NAS with 4 Western Digital Caviar Green 2 TB disks. For my new server I will use an old HP dc7700 P4 with 5 GB of RAM. To connect the NAS to the server I use a Cisco 8-port Gigabit switch (SLM2008).

QNAP TS-410

First I installed the disks and booted the system for the first time. Using the configuration utility I configured the disks in a RAID 5 setup, which gives me 6 TB of usable disk space. During this setup I also installed the latest firmware, which I downloaded from the QNAP website. The synchronisation of the RAID 5 array took over 24 hours, which is quite long, but it only has to be done once. The reason I bought the TS-410 is that it is the cheapest 4-disk NAS with iSCSI support. We can use iSCSI to connect the VMware ESXi 4.0 server to the storage. In the QNAP web interface the iSCSI service should be enabled and an iSCSI target and LUN should be created. A target is comparable to the SCSI card we used years ago; it is what the remote system connects to. LUNs are similar to physical disks, but with iSCSI they are virtual. Now we have created the basis of our storage network for the VMware ESXi 4.0 server.

VMware ESXi 4.0

First I cleaned up the old HP dc7700 PC and removed the hard disks. Next I inserted a 1 GB flash drive, but that one turned out to be broken, so I used a 2 GB drive instead (1 GB is sufficient). I booted from the installation CD and installed the ESXi server on the flash drive. After rebooting I was able to connect to the server with my browser. Using the console I gave the server a fixed IP address. Next we need to configure the VMware ESXi 4.0 server using the vSphere Client, which can be downloaded from the management IP address.

VMware vSphere Client

In the VMware vSphere Client we need to get the data storage working, as the server doesn’t have local storage. Open server -> configuration -> networking and add a VMkernel port to the virtual switch. This allows the storage adapter to access the NAS.

[Screenshot: vmware-networking]

Now go to Storage Adapters and open the properties of the iSCSI adapter. Click Configure and select Enabled to enable the iSCSI adapter, then go to Dynamic Discovery and add your NAS IP address with port 3260. The device should now show up in the details below.

[Screenshots: vmware-storage, vmware-storage2, vmware-storage3]

In the storage configuration we need to add the LUN to VMware so it can be used by a virtual machine.

[Screenshot: vmware-storage4]

Cisco Switch

To connect all of the NAS’s network interfaces to the switch, the switch has to be configured properly: the ports need to have the LACP option enabled.

[Screenshot: vmware-network-2]

Once this is done in the management interface of the switch, the NAS needs to be configured to use IEEE 802.3ad Dynamic Link Aggregation.

[Screenshot: nas-network-1]

Finally the second network interface can be connected to the switch. Use the management interface of the switch to verify that everything is working correctly.

[Screenshot: nas-network-3]

Now we have a nice system to experiment with. The only thing left to do is migrate my existing CentOS Xen server to this VMware system.

Ruby on Rails MiniWiki Plugin Released

To add a wiki to a Ruby on Rails app, I’ve created a plugin called MiniWiki. In 3 simple steps it is possible to add a very basic wiki to your application. The only dependency is RedCloth, and the generator creates a migration for just two tables in your database. For more detailed information see the GitHub page.

Try here

SWFUpload Direct to Amazon S3 in Ruby on Rails

I’m working on various projects, and for one of them the company wanted a file sharing website, like yousendit.com for example, but hosted in-house. I proposed Amazon S3 for storing the files, because otherwise the VPS would become very expensive. The site should also handle large files, so a reliable upload method is needed. SWFUpload is a well-known Flash upload component. So the requirements are complete: Ruby on Rails, Amazon S3 and SWFUpload.

First I created a config file for my Amazon S3 credentials. The credentials depend on the Ruby on Rails environment.

config/amazon_s3.yml

development:
  bucket_name: BUCKET_NAME
  access_key_id: ACCESS_KEY_ID
  secret_access_key: SECRET_ACCESS_KEY

test:
  bucket_name: BUCKET_NAME
  access_key_id: ACCESS_KEY_ID
  secret_access_key: SECRET_ACCESS_KEY

production:
  bucket_name: BUCKET_NAME
  access_key_id: ACCESS_KEY_ID
  secret_access_key: SECRET_ACCESS_KEY

Next I created a controller and added an index method. The method reads the S3 settings from the config file and generates the fields required by SWFUpload and S3.

  def index
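    # Builds the S3 POST policy document, signs it with the secret access key,
    # and exposes the resulting form fields (@post) and upload URL (@upload_url)
    # to the view below.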
    filename = "#{RAILS_ROOT}/config/amazon_s3.yml"
    config = YAML.load_file(filename)

    bucket            = config[ENV['RAILS_ENV']]['bucket_name']
    access_key_id     = config[ENV['RAILS_ENV']]['access_key_id']
    secret_access_key = config[ENV['RAILS_ENV']]['secret_access_key']

    key             = ENV['RAILS_ENV']
    acl             = 'public-read'
    expiration_date = 10.hours.from_now.utc.strftime('%Y-%m-%dT%H:%M:%S.000Z')
    max_filesize    = 2.gigabyte

    policy = Base64.encode64(
      "{'expiration': '#{expiration_date}',
        'conditions': [
          {'bucket': '#{bucket}'},
          ['starts-with', '$key', '#{key}'],
          {'acl': '#{acl}'},
          {'success_action_status': '201'},
          ['starts-with', '$Filename', ''],
          ['content-length-range', 0, #{max_filesize}]
        ]
      }").gsub(/\n|\r/, '')

    signature = Base64.encode64(
                  OpenSSL::HMAC.digest(
                    OpenSSL::Digest::Digest.new('sha1'),
                    secret_access_key, policy)).gsub("\n","")

    @post = {
      "key" => "#{key}/${filename}",
      "AWSAccessKeyId" => "#{access_key_id}",
      "acl" => "#{acl}",
      "policy" => "#{policy}",
      "signature" => "#{signature}",
      "success_action_status" => "201"
    }

    @upload_url = "http://#{bucket}.s3.amazonaws.com/"
  end

And the index.html.erb view.

<% content_for :head do %>
<link href="/stylesheets/swfupload.css" rel="stylesheet" type="text/css" />
<% end%>

<script type="text/javascript" src="/javascripts/swfupload/swfupload.js"></script>
<script type="text/javascript" src="/javascripts/swfupload/swfupload.queue.js"></script>
<script type="text/javascript" src="/javascripts/swfupload/fileprogress.js"></script>
<script type="text/javascript" src="/javascripts/swfupload/handlers.js"></script>
<script type="text/javascript">
  var swfu;

  window.onload = function() {
      var settings = {
          flash_url : "/assets/swfupload.swf",
          upload_url: "<%= @upload_url %>",
          http_success : [ 200, 201, 204 ],        // FOR AWS
          
          file_size_limit : "2 GB",
          file_types : "*.*",
          file_types_description : "All Files",
          file_upload_limit : 100,
          file_queue_limit : 0,
          file_post_name : "file",                // FOR AWS
          
          custom_settings : {
              progressTarget : "fsUploadProgress",
              cancelButtonId : "btnCancel"
          },
          debug: <%= ENV['RAILS_ENV']=='development' ? 'true' : 'false' %>,

          // Button settings
          button_image_url : "/images/buttonUploadText.png",
          button_placeholder_id : "spanButtonPlaceHolder",
          button_width: 61,
          button_height: 22,
          
          // The event handler functions are defined in handlers.js
          file_queued_handler : fileQueued,
          file_queue_error_handler : fileQueueError,
          file_dialog_complete_handler : fileDialogComplete,
          upload_start_handler : uploadStart,
          upload_progress_handler : uploadProgress,
          upload_error_handler : uploadError,
          upload_success_handler : uploadSuccess,
          upload_complete_handler : uploadComplete,
          queue_complete_handler : queueComplete,   // Queue plugin event
          
          post_params: <%= @post.to_json %>        // FOR AWS
      };

      swfu = new SWFUpload(settings);
     };
</script>

<div id="content">
  <form id="form" action="/upload/upload" method="post" enctype="multipart/form-data">
          <div class="fieldset flash" id="fsUploadProgress">
          <span class="legend">Upload Queue</span>
          </div>
      <div id="divStatus">0 Files Uploaded</div>
          <div>
              <span id="spanButtonPlaceHolder"></span>
              <input id="btnCancel" type="button" value="Cancel All Uploads" onclick="swfu.cancelQueue();" disabled="disabled" style="margin-left: 2px; font-size: 8pt; height: 29px;" />
          </div>

  </form>
</div>

Upload the file crossdomain.xml to the root of your bucket. This allows Flash to upload to a domain different from the one serving the page.

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <allow-access-from domain="*" secure="false" />
</cross-domain-policy>

Finally I took the files from the SWFUpload simpledemo and placed them in the following directories:

  • swfupload.swf in public/assets/
  • fileprogress.js, handlers.js, swfupload.js and swfupload.queue.js in public/javascripts/swfupload/
  • buttonUploadText.png in public/images/

Now SWFUpload should be working in your Ruby on Rails application.

Callback

For my application I needed a callback to let the application know when a file was successfully uploaded to the S3 bucket. To get this functionality I added a method to the controller and modified the handlers.js file.

  def upload_done
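    # Called via Ajax from handlers.js once S3 reports a successful upload;
    # stores the file's metadata so the application knows it is available on S3.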
    file = ShareFile.new

    file.name = params[:name]
    file.filestatus = params[:filestatus]
    file.filetype = params[:type]
    file.size = params[:size]
    file.s3_available = true

    file.save
  end

And the modified uploadSuccess function in handlers.js:

function uploadSuccess(file, serverData) {
  // HERE: Send a notification upload has succeeded
  new Ajax.Request('/share/upload_done?'+Object.toQueryString(file), {
      method:'get',
      asynchronous: false,
      onSuccess: function(){
          var progress = new FileProgress(file, this.customSettings.progressTarget);
          progress.setStatus("Sending meta data.");
      }
  });
  // HERE: end
  
  try {
      var progress = new FileProgress(file, this.customSettings.progressTarget);
      progress.setComplete();
      progress.setStatus("Complete.");
      progress.toggleCancel(false);

  } catch (ex) {
      this.debug(ex);
  }
}

When SWFUpload finishes uploading a file, it uses the JavaScript callback to update the status in the form and to send a notification to the Ruby on Rails application.
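
Depending on your routes you may still need to expose that action; with Rails 2.x style routes a minimal entry could look like this (the controller and action names are assumptions based on the /share/upload_done URL used above):

# config/routes.rb -- hypothetical route, adjust to your own controller
map.connect 'share/upload_done', :controller => 'share', :action => 'upload_done'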

Good luck!

XEN, Ruby Enterprise Edition and 4gb Seg Fixup

My new CentOS Xen server has a virtual machine that serves as a dedicated web server. On the console and in /var/log/messages the following message kept appearing:

4gb seg fixup, process ruby (pid 20252), cs:ip 73:00e0a636
printk: 151939 messages suppressed.

The console is unusable because a new message appears every second. The logfile is unusable as well: it grows very large, takes a long time to open in a text editor, and the interesting messages are hard to find. After some googling I found a page with instructions to fix this. I’ve combined various methods, and the one below is the most robust. It wraps the gcc and g++ binaries in scripts that add the correct parameter.

mv /usr/bin/gcc /usr/bin/gcc.orig
mv /usr/bin/g++ /usr/bin/g++.orig
echo '#!/bin/sh' > /usr/bin/gcc
echo '#!/bin/sh' > /usr/bin/g++
echo 'exec gcc.orig -mno-tls-direct-seg-refs "$@"' >> /usr/bin/gcc
echo 'exec g++.orig -mno-tls-direct-seg-refs "$@"' >> /usr/bin/g++
chmod a+x /usr/bin/gcc
chmod a+x /usr/bin/g++

Extract Ruby Enterprise Edition. Be sure to compile from a freshly extracted copy, because already-compiled files will not be recompiled. Then compile and install Ruby Enterprise Edition as described in the manual.

tar zxvf ruby-enterprise-1.8.7-2009.10.tar.gz
./ruby-enterprise-1.8.7-2009.10/installer

Don’t forget to reinstall Passenger, and of course reinstall all the gems from the old installation.

Now you can restore gcc and g++, because leaving the wrappers in place will probably break yum updates of gcc and g++.

rm -rf /usr/bin/gcc && mv /usr/bin/gcc.orig /usr/bin/gcc
rm -rf /usr/bin/g++ && mv /usr/bin/g++.orig /usr/bin/g++

Hopefully I’ve saved you guys some time ;-)
