
Already well past 1.0! Tap has evolved a great deal. Many of the big changes are internal — the object APIs are now well-defined and consistent, the environment is MUCH simpler, more powerful, and easier to configure, and there is far less cruft.

I hope to write up more posts, but for now here is an update showcasing ways you can make workflows. The new syntax is much cleaner and purer.

 
  % tap load abc -: dump
  vs
  % tap run -- load abc --: dump

Commands (e.g. run) have been dropped. Now the main ARGV gets split along breaks into multiple smaller argvs, each of which is turned into an object. For instance:

 
  ['--', 'load', 'abc']  # the '--' is implied
  ['-:', 'dump']
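The splitting itself can be sketched in a few lines of Ruby (a hypothetical helper, not Tap's actual parser; the `split_argv` name and the break pattern are illustrative):

```ruby
# Hypothetical sketch: split an ARGV into smaller argvs along breaks.
# Break arguments all start with a dash ('--', '-', '-:', '-@', '-/...').
# Illustrative only; this is not Tap's actual implementation.
BREAK = /\A-(-|:|@|\/.*)?\z/

def split_argv(argv)
  argvs = [['--']]            # the leading '--' is implied
  argv.each do |arg|
    if arg =~ BREAK
      argvs << [arg]          # a break starts a new argv
    else
      argvs.last << arg       # everything else joins the current argv
    end
  end
  argvs
end
```

So `split_argv(%w{load abc -: dump})` produces the two argvs shown above. Note that config flags like `--reverse` do not match the break pattern and stay with their argv.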

The leading argument identifies a constant, later args get parsed for configs, and the break itself determines what to do with the remainder. For instance the first set of arguments is equivalent to this:

 
  require 'tap/tasks/load'
  task = Tap::Tasks::Load.new
  task.enq 'abc'

Breaks all start with a dash. These are various ways of writing the same workflow, each of which will print ‘abc’:

  # as above, '-:' indicates a sequence join
  % tap load abc -: dump

  # the implied '--' means 'enque the next object'
  % tap -- load abc -: dump

  # objects can be defined without enque using '-'
  % tap -- load abc - dump - join 0 1

  # you can reorder as long as you keep the identifiers straight
  % tap - dump -- load abc - join 1 0

  # now use a signal to enque the load task
  % tap - load - dump - join 0 1 -- signal enq 0 abc

  # a little cleaner
  % tap - load - dump - join 0 1 -/enq 0 abc

  # still cleaner
  % tap - load - dump - join 0 1 -@ 0 abc

  # a signal sent to the load task directly
  % tap - load - dump - join 0 1 -/0/enq abc

  # now purely through signals
  % tap -/set 0 load -/set 1 dump -/bld join 0 1 -/0/enq abc

The last example is verbose but reveals the underlying nature of Tap: signals. Tap is now driven entirely by signals, so workflows can be created and driven interactively through a prompt; later, the syntax will translate directly to the web.

  % tap prompt
  /set 0 load
  #<Tap::Tasks::Load:0x1017d9298>
  /set 1 dump
  #<Tap::Tasks::Dump:0x1017b5528>
  /bld join 0 1
  #<Tap::Join:0x1017ac298>
  /enq 0 abc
  #<Tap::Tasks::Load:0x1017d9298>
  /run
  abc
  # future (for example)
  localhost:8080/app/set?var=0&class=load
  localhost:8080/app/set?var=1&class=dump
  ...

Signals mean that workflows can be written out to taprc files:

  [taprc]
  set 0 load
  set 1 dump
  bld join 0 1
  enq 0 abc
  % tap --- taprc
  abc

Objects you load through a taprc are stored so you can continue to interact with them:

  % tap --- taprc -@ 0 xyz
  abc
  xyz

The latest Tap also adopts and extends the rakish syntax that was Rap (Rap is now gone, FYI).

  [tapfile]
  desc "run a workflow"
  work :example, %q{
    - load
    - dump
    - join 0 1
  }
 
  % tap example abc
  abc

As with a rakefile, tapfiles can define new tasks, with configurations and extended documentation (1.9.1 output):

  [tapfile]
  # This documentation will show up if you run:
  #   % tap sort --help
  
  desc "sort a string by word"
  task :sort, :reverse => false do |config, str|
    words = str.split.sort
    config.reverse ? words.reverse : words
  end
 
  % tap sort 'the swift brown fox' -: dump
  ["brown", "fox", "swift", "the"]
  
  % tap sort 'the swift brown fox' --reverse -: dump
  ["the", "swift", "fox", "brown"]

Tap is much more flexible than it used to be, but you can still do everything you could before, like defining, packaging, and distributing tasks as ordinary Ruby classes. Check out the documentation for more info.


In studying the inheritance of methods I came across what I consider a surprising behavior of modules included into classes.

First, when you include a module into a class, the module methods are available in the class and they also propagate down to subclasses. This is reflected in the fact that the module is added to the ancestors of both the including class and all subclasses (be they defined before or after the include).

 
class LateIntoClassTest < Test::Unit::TestCase
  
  class A
  end
  
  class B < A
  end
  
  module LateInClass
  end
  
  class A
    include LateInClass
  end
  
  class C < A
  end
  
  def test_including_a_module_into_a_superclass_adds_to_ancestors
    # LateInClass is added to A
    assert_equal [A, LateInClass, Object, Kernel], A.ancestors
    
    # LateInClass is added to B
    assert_equal [B, A, LateInClass, Object, Kernel], B.ancestors
    
    # LateInClass is added to C
    assert_equal [C, A, LateInClass, Object, Kernel], C.ancestors
  end
end

What is surprising (at least to me) is that the same is not true when you include a module into an included module. I thought modules were kind of like superclasses; if you add to a module, you add to everything that uses the module. Not so.

Here you can see LateInModule is not added to classes that already include A. By contrast, classes defined after the ‘late’ include will add LateInModule to their ancestors.

 
class LateIntoModuleTest < Test::Unit::TestCase

  module A
  end
  
  class B
    include A
  end
  
  module LateInModule
  end
  
  module A
    include LateInModule
  end
  
  class C
    include A
  end
  
  def test_including_into_an_included_module_DOES_NOT_add_to_ancestors
    # LateInModule is added to A
    assert_equal [A, LateInModule], A.ancestors
    
    # LateInModule is missing from B
    assert_equal [B, A, Object, Kernel], B.ancestors
    
    # LateInModule is added to C
    assert_equal [C, A, LateInModule, Object, Kernel], C.ancestors
  end
end

You might take from this the lesson that modules are not superclasses, they are collections of methods poured into a class by include. But that isn’t the full story either. After all, you can modify a module and still have those changes propagate into an including class.

 
class LateModuleModificationTest < Test::Unit::TestCase

  module A
  end
  
  class B
    include A
  end
  
  module A
    def late_method; true; end
  end
  
  def test_included_modules_MAY_be_modified
    assert_equal true, B.new.late_method
  end
end

I find it tricky to express this behavior descriptively, even though the cause is clear: a module is inserted into the ancestor chain only at the moment it is included into a class, so later additions to the module's own ancestry do not propagate to classes that already included it.
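On the Ruby versions discussed here, one workaround is to re-include the module after the late include, which inserts the missing ancestor (a sketch with illustrative names; note that Ruby 3.0 later changed include so that late includes propagate automatically):

```ruby
# Sketch: a class that included A before a late include into A can
# pick up the missing ancestor by re-including A. Module names are
# illustrative; on Ruby 3.0+ the re-include is no longer necessary.
module A; end

class B
  include A
end

module LateInModule
  def late?; true; end
end

module A
  include LateInModule    # too late for B on Ruby 1.8/1.9
end

class B
  include A               # re-include inserts LateInModule
end

B.ancestors.include?(LateInModule)  # => true
B.new.late?                         # => true
```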

Update: for those who are interested, Redmine has a feature request regarding how modules get added to ancestors.

I’ve been trying to speed up the command line response of tap by judiciously loading only what needs to be loaded up front.  This script has proven quite helpful… it’s a profiler for require/load. Simply add the requires you want to profile at the end of the script and run it from the command line:

  % ruby profile_load_time.rb
  ================================================================================
  Require/Load Profile (time in ms)
  * Load times > 0.5 ms
  - duplicate requires
  ================================================================================
  * 21.6: yaml
  *   0.5: stringio
      0.2: yaml/error
  *   2.1: yaml/syck
  *     0.7: syck
  *     1.2: yaml/basenode
          0.3: yaml/ypath
      0.3: yaml/tag
      0.3: yaml/stream
      0.3: yaml/constants
  *   15.9: yaml/rubytypes
  *     11.8: date
  *       1.2: rational
  *       4.0: date/format
  -         0.1: rational
  *   0.9: yaml/types

The output flags requires that take longer than 0.5 ms, and requires that occur multiple times. Long requires are often good candidates for autoload… if you want to have YAML available but feel 22 ms is too long to wait up front:

  autoload(:YAML, 'yaml')

Then the file will be required the first time YAML gets used, if at all.
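You can watch the deferral happen with any stdlib file (a sketch; PStore stands in for YAML here only because it is unlikely to be loaded already):

```ruby
# Sketch of autoload deferring a require until first use.
autoload(:PStore, 'pstore')

# Registering the autoload does not load the file...
before = $LOADED_FEATURES.any? { |f| f.end_with?('/pstore.rb') }

# ...the first reference to the constant triggers require 'pstore'.
PStore
after = $LOADED_FEATURES.any? { |f| f.end_with?('/pstore.rb') }

p [before, after]   # typically [false, true]
```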