
Mar 07 2005


I *heart* Rails

(With any luck, this is just part one of an ongoing series)

So, I like Ruby a lot, intellectually speaking. I love the duck typing and the mixins, I love closures and blocks, and I dig the direct OpenSSL support. Having never been paid a dime to actually produce anything of value in it, though, I couldn't speak to its productivity as a professional language before.
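
To put a little code behind that list, here's a quick sketch; the Fingerprintable module and Invoice class are invented for illustration:

    require 'openssl'

    # A mixin: any class that includes it picks up the behavior,
    # no inheritance hierarchy required.
    module Fingerprintable
      def fingerprint
        OpenSSL::Digest::SHA1.hexdigest(to_payload)
      end
    end

    class Invoice
      include Fingerprintable

      def initialize(lines)
        @lines = lines
      end

      def to_payload
        @lines.join("\n")
      end
    end

    # Duck typing: anything that responds to to_payload can mix it in.
    # Blocks: a closure passed right at the call site.
    [Invoice.new(["widget,2"]), Invoice.new(["sprocket,5"])].each do |i|
      puts i.fingerprint
    end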

Now, however, that has changed (sort of). I've been working on an application using the Java/Spring/Hibernate/JSTL stack (Mr. Geary, Mr. Lewis Ship, please don't kill me for that last bit). The application is not overly complicated, though the domain model is a tad hairy. I've been working on it for more than five months now. I recently decided to start re-implementing it in Rails. My experience has been surprising.

First of all, I've been able to re-implement 80% of the functionality in just under four nights of work. Some of that most assuredly has to do with the fact that I understand the domain pretty thoroughly by this point. But a lot of it has to do with the sheer productivity of the framework. Given that I have complete control over the data schema, I was able to make a total of 7 edits to the schema to get it to work perfectly with Rails' defaults for mapping. That alone has saved me thousands of keystrokes.
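
For anyone who hasn't seen Rails' mapping conventions: a model class maps to a like-named, pluralized table with an integer primary key named id, and foreign keys are assumed to be named <table>_id. A minimal sketch (Order and LineItem are hypothetical, not my client's actual domain):

    # Maps to a table named "orders" with an integer "id" primary key.
    class Order < ActiveRecord::Base
      belongs_to :customer    # expects an orders.customer_id column
      has_many   :line_items  # expects a line_items.order_id column
    end

    # No XML, no mapping files -- the conventions carry it:
    order = Order.find(1)
    order.line_items.each { |item| puts item.inspect }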

Secondly, implementing changes and running tests takes no time whatsoever. The configure, compile, deploy, reset cycle for running tests is time-consuming on my original stack, and non-existent with Rails. This particular facet caught me off guard; I wasn't expecting it to make much of a difference to the overall experience.

There are a lot of other things I like about the framework, like the view technologies (partials and helpers, rhtml, etc.), the well-constructed controller hierarchy, and the new routing features, all of which I'll probably tackle in future posts. But mostly, I wanted to say this:

Rails is actually faster.

At runtime, the Rails implementation is at least as fast as the original stack in almost every case, and for a not-insignificant portion of actions, it actually performs better. I haven't run benchmarks yet, though I will as this effort progresses, but I was shocked (shocked, I say) to discover this.

I don't know if I can switch the project over to Rails; I haven't even mentioned this experiment to my customer yet, so I don't know if this will end up being my first commercial Rails project or just an experiment. But regardless, I can now say that I hope it won't be my LAST Rails project.

Mar 07 2005


Milwaukee No Fluff a Great Success

The inaugural No Fluff Just Stuff tour event for 2005 went off swimmingly last weekend in Milwaukee. I introduced our new security talks there (Crypto for Programmers, and Writing Secure Web Services for both Axis and .NET). I was very anxious to see not only how well attended the crypto talk would be, but also how well received. It turns out that on both counts the presentation was a smashing success, and I'm now very excited to get up to Philadelphia this weekend and give it again. This weekend we're also introducing the follow-up presentation (tentatively titled Applied Cryptography, though we don't want to make Mr. Schneier mad, so we'll probably change it ASAP).

The second part of the talk will walk through certificates, SSL/TLS, PKI and Kerberos, looking at the implementations and their challenges. We'll even talk a little about alternate trust management solutions for the truly paranoid. The two presentations are a great one-two punch for understanding the building blocks of cryptographic identity and authorization systems and then seeing how they combine to form the platforms we use today.

Feb 23 2005


SysInternals releases rootkit detection tool

The team at SysInternals has just released RootkitRevealer, which does essentially what Microsoft's still-internal GhostBuster tool does: it scans the hard drive from a couple of different levels (user mode and kernel mode, roughly speaking). Any discrepancy between the two views could be the result of a rootkit attempting to hide its tracks.
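
The comparison step itself is simple in principle. Here's a toy sketch of the idea (my own illustration, not how RootkitRevealer is actually implemented), assuming the two scans have been dumped to text files:

    # One listing gathered through the high-level (user-mode) API, one
    # gathered by reading the raw volume (kernel-level view). A rootkit
    # filtering the API will be missing from the first list only.
    api_view = File.readlines("scan_api.txt").map { |l| l.chomp }
    raw_view = File.readlines("scan_raw.txt").map { |l| l.chomp }

    (raw_view - api_view).each do |path|
      puts "hidden from the Windows API: #{path}"
    end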

In an earlier post, I talked about the applicability of tools like GhostBuster vs. things like Tripwire. Seems like the guys at SysInternals are in agreement, noting that RootkitRevealer could still be tricked by really sophisticated kernel-mode rootkits, and that offline (CD-bootable, etc.) scanners would be more sound, but still potentially susceptible to false negatives.

Regardless, big kudos to them for getting the tool out there and not making us all wait for GhostBuster. Plus, now we don't have to put up with a bunch of "he slimed me" jokes.

Feb 15 2005


MS GhostBuster

Bruce Schneier points out, rightly, that Microsoft should release its internal-only security tool (codenamed GhostBuster) as an open source project, or at the very least as a commercial tool for people managing Windows deployments. He is absolutely dead on; no matter what state the product is in, it is in Microsoft's interest, and all of ours, to get it into the hands of the public as soon as possible.

However, I'm not quite as keen on this tool as I am on something like Tripwire. GhostBuster relies on something inherently unreliable to do its job: the behavior of malicious software. One of the most common kinds of security problems found in the wild is the rootkit: a piece of software that lives on the machine, waiting for instructions from off-server (usually from an IRC server). More insidiously, rootkits usually hijack system services to hide their existence by filtering system requests for directory listings, registered ports, etc.

GhostBuster works by running a program within the potentially affected OS to, essentially, list all the files on the machine (it's more complicated than that). Then you boot to the GhostBuster CD and run the same program, this time from the bootable OS on the CD. This version of the program won't be subverted by any rootkit, so any files hidden during the first run will show up. GhostBuster then compares the two result sets and alerts you to the files that differ: the potentially malicious ones.

This is a great idea, but as I said earlier, it relies on assumptions about what the rootkit is doing in order to work effectively. If a rootkit adapts to recognize when the GhostBuster program is being run, it can choose not to hide the offending results from the program. This will cause a tit-for-tat war of escalation, exactly like what we have in the anti-virus world.

Tripwire, and tools like it, work differently. You only run these tools from a bootable CD, ensuring that they are never tainted. The tools check the files on a given disk and record statistics about them. Using highly configurable rules, the app then checks the results of future runs against that baseline, alerting you to changes. This means that, instead of fighting against the behavior of malicious code, you are fighting against the behavior of your own system administrators: any change is either malicious code or a normal change to the system, and your rules have to rule out the good ones. This strikes me as an inherently more stable approach.
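
The core of the baseline-and-compare approach fits in a few lines; here's a sketch of the idea (my own illustration, with placeholder paths; a real tool like Tripwire records far more than one digest per file):

    require 'digest/sha1'
    require 'find'

    # Record a digest for every regular file under a directory tree.
    def snapshot(root)
      sums = {}
      Find.find(root) do |path|
        next unless File.file?(path)
        sums[path] = Digest::SHA1.hexdigest(File.read(path))
      end
      sums
    end

    # Taken once from known-good boot media and stored off-machine:
    baseline = snapshot("/mnt/suspect/bin")

    # On a later run, anything that differs is either an admin's change
    # or something malicious; your rules sort out which.
    snapshot("/mnt/suspect/bin").each do |path, sum|
      puts "changed: #{path}" if baseline[path] && baseline[path] != sum
    end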

However, I'd love to have GhostBuster in my own tool-chest, and a truly effective rootkit detection strategy would employ both approaches.

Feb 07 2005


Better, Faster, Lighter Java gets a terrible review

Here's the text of the one-star review.

"This book with its talk of the business "sponsor" was not even spell checked. The code will not compile because in Java declarations are case sensitive. Whole paragraphs are repeated. The content is vague and seems to be more of a rant than an attempt to teach.

Horrible. One star is generous."

This reviewer sent an even more thorough review to O'Reilly directly, but as that was a private email, I'll not repost those comments here.

Mr. Braithwaite may very well have some legitimate points. Our content might be somewhat vague, though I rather like to think of it as "general". I also don't see it as a rant, but a reader who is heavily invested in some of the technologies we target might very well feel ranted at, I suppose.

My only problem with the review? If you find a spelling mistake or an editorial problem, please let us know so it can go on the errata page. Luckily, we were able to figure out where the word "sponsor" was misspelled and add it to our list....

(P.S. The code problems were the result of MS Word's auto-capitalization. We should have caught them nonetheless, but at one point, all that code compiled... and even ran! :) )

Feb 05 2005


What is a Service?

Ted Neward has begged for a definition of "service", fearing that SOA will become just so much marketecture littering our already littered landscape. Well, for starters, too late.

However, since he asked, here's my definition of a "service".

Service. n. (sir-viss) A unit of functionality that has a better than 10% chance of being called by an alien machine.

To further clarify: "a unit of functionality" means some set of related abilities of a system, like a bunch of document transformation code, or ways of processing credit cards, etc. "better than 10% chance" is a totally off-the-cuff WAG (wild-assed guess). "alien machine" means alien in the "not this machine" as well as the "foreign os/platform/vm" senses.

Because the definition of a service says that it represents functionality to be called from somewhere else, and since conventional wisdom about distributed application calls has long since settled into rigor mortis, the transitive property of technical definitions applies: our definition of "service" implicitly includes coarse-grained access to our functionality, the minimization of round trips, and the avoidance of tight coupling to type systems, runtime support, or stateful connections.
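
In code terms, that boils down to the difference between a chatty, fine-grained interface and a coarse-grained one that does a whole unit of work per round trip. A sketch, with invented class and method names:

    # Chatty: an alien caller pays a network round trip per step, and
    # the server has to hold state between the calls.
    class ChattyCardGateway
      def set_card(number);  @number = number; end
      def set_amount(cents); @cents  = cents;  end
      def authorize;         "auth-code";      end
    end

    # Coarse-grained: one self-contained request in, one result out.
    # This is the shape worth exposing to alien machines.
    class CardGateway
      def authorize(request)  # request is a plain hash: no shared types
        { :approved  => true,
          :auth_code => "auth-code",
          :amount    => request[:cents] }
      end
    end

    result = CardGateway.new.authorize(:number => "4111111111111111",
                                       :cents  => 1299)
    puts result[:approved]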

There's no need to make "services" out of library calls that will only be accessed by local or semi-local code. Likewise, if the functionality has a pretty good chance of being accessed by an alien, you'd better make it a service.

Ted, what's missing from this definition?

Feb 02 2005


How can JDO's rejection be "a victory"?

Don's been running with this meme started by Ted Neward about how OR/M frameworks are like the Vietnam war: tons of wasted resources and no clear exit strategy. Don has posted several times about this, clearly coming down on the side of the anti-OR/M folks. One of the reasons is that there are so many frameworks, so many developers chasing the same goals, throwing resources after the same problems, butting up against the same laws of diminishing returns.

Which is why I don't understand how the fall of the JDO 2.0 spec in the JCP is a "victory". The JDO 2.0 spec would have standardized the platform and, I think, brought the overall number of competing frameworks down somewhat. As the big JDO suppliers (both commercial and open source) solidified around an accepted standard, there would be fewer and fewer open source OR/M frameworks springing up all the time. At the very least, the big boys would have a central sandbox to play in. Without the endorsed spec, we are left pursuing a lot of individual agendas and waiting to see what happens when EJB 3.0 finally makes it onto paper.

What I'd really like to see is the JDO 2.0 spec win the reconsideration ballot and get some real community momentum behind its various implementations, which would in turn keep the pressure on the EJB 3.0 team to create a workable replacement for CMP (or, better yet, just a pluggable facade behind which we can place whatever already-working data layer we want).

Jan 30 2005


MySQL Worm and Microsoft

I am a big fan of Robert Hensing's blog, which details his dealings with security problems in the Windows world. He's got strong opinions, but he reports really interesting details about fighting the good fight on the Microsoft side of the world.

So I was fairly surprised when his post about the new MySQL worm on Windows started with this sentence: "So it seems that there is a new MySQL bot that is spreading to Windows machines running MySQL with weak SA (or whatever MySQL's equivalent is) passwords." (For the record, MySQL's equivalent of the SA account is simply root.) If the head of Microsoft's incident response team can't be bothered to learn even the name of the feature that is causing the problem, just because it isn't a Microsoft product, that doesn't instill a lot of confidence in their ability to work well with the community.

Robert probably didn't mean to sound that way, but he should realize that, as a public face of Microsoft's security team, it might be better to at least pretend to care what other people are doing and how they do it.

Jan 29 2005


FileVault is my enemy

Being the good security citizen that I am, I thought I would give Apple's FileVault a try. FileVault encrypts your home directory, which is useful for somebody whose laptop is often left unattended in a classroom during cookie breaks, like mine is. Turning FileVault on is a cinch, and performance-wise, I barely noticed a difference. The only problem I ever had was that when you delete files, the space isn't reclaimed on disk until you allow FileVault to compress the space. But that only takes a few seconds whenever you log out, so I was happy.

Then I decided to upgrade from 10.3.4 to 10.3.7. The update proceeded normally, without warnings or questions. But when I rebooted, it was as if I had installed OS X from scratch: no personal settings, no environment settings, nothing. The only things that survived were my username and password. Everything that lived in my home directory was gone.

To date, I have no idea what really happened. My assumption is that the upgrade process saw a large, encrypted data block and decided "I don't know what to do with that, and it is in my way, so I'll get rid of it". The saddest part of all this is that FileVault is intended to *protect* your data. Sigh.

Jan 24 2005


Javagruppen Revisited

I had a really eye-opening time taking part in a panel discussion at the end of the Javagruppen conference. Considering that the attendees included the guys who make Spring, a member of the Geronimo team, a member of the EJB 3.0 spec working group, and several loud-mouthed interlopers (I count myself firmly in that last camp), the panel discussion had real potential to become a shouting match, especially since some extremely declaratory statements about the usefulness of EJB had been made early in the weekend. It was a big pleasure, therefore, to see everyone rise to the occasion and come to some agreement about the relative merits of the different approaches.

Most specifically, when one attendee brought up the question of simplicity vs. complexity, the discussion became really useful. It is just presupposed that lightweight containers are "simpler" and EJB is "more complex". But as Stuart pointed out to us all, sometimes it is pretty useful to be hit upfront with the complexity. EJB *is* more complex, but doesn't pretend otherwise. It can be useful to confront that complexity all at once if, in fact, you need all that complexity to achieve your end goal. If, however, your end goal can be achieved without it, then a lightweight container which doesn't pound you over the head with complexity up front can allow you to get your work done faster. If and when you need the complexity, you can lift the lid and look underneath.

The difference between the two approaches isn't so much the relative complexity as the modularity and pluggability. The lighter frameworks put a premium on being able to ignore features you don't need and replace them if a better implementation comes along, while the "heavier" J2EE containers bake most of those choices into the framework and make you deal with the configuration complexity (at the very least) from the opening bell. But the truth is, sometimes that's just what you need.
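
As a tiny illustration of the pluggability point (the names here are invented): when a component is handed its dependencies instead of constructing them itself, swapping an implementation is a one-line change.

    # The component depends on a role (anything with a deliver method),
    # not on a concrete class.
    class InvoiceMailer
      def initialize(transport)
        @transport = transport
      end

      def send_invoice(text)
        @transport.deliver(text)
      end
    end

    class SmtpTransport
      def deliver(text); puts "SMTP: #{text}"; end
    end

    class LoggingTransport   # a stand-in to swap in during tests
      def deliver(text); puts "logged only: #{text}"; end
    end

    # The "configuration" is just which implementation gets constructed.
    InvoiceMailer.new(LoggingTransport.new).send_invoice("Invoice #42")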
