Yet More Development Angst


As is obvious from my previous post, I’m completely disenchanted with the preachings of the patron saints of technology and their constant gear switching on “best practice”, “best language”, “best framework”.

Best practice is just that, what works well in practice, not theory. And theory is the domain in which they dwell. I get tired, as I’m sure most of you do, of beating the “best practice” drum only to have “best practice” tossed by the wayside for the “just get it done” mentality that EVERY corporation, when put to it, adheres to.

“Best language” and “best framework” are both very subjective terms, and I realize this. I do, however, expect prophets of a particular “best **/*” to stick to one or maybe two for a few years before they declare the “next” best language or framework.

Just as these same cats a few years ago were mocking client side work and declaring that everything should be done on the server side, they’ve now switched gears again and declared it should be done on the client with minimal calls to the server. There will be yet another switchback in the next few years as more and more client side code becomes unmaintainable.

All of the switching and technological “discovery” is perfectly fine and good, but if you ask yourself the simple question “Has development gotten easier with the advent of all these new technologies?” you’ll find the answer is an emphatic no. Not only is it no, it’s an order of magnitude more difficult to properly design, implement, and pay for an enterprise level application than it was just 10 years ago…

Users not only expect but demand the same kind of functionality and responsiveness that they had 10 years ago, when VB desktop apps ruled the world. And here we are, still mired in the same transport-level crap we started out in…


Ruby Can Save The World!!!


(This is an excerpt from an email, but I felt that rant wasn’t in a public enough forum)…

*** Starting Rant ***

“Why do we continue to re-invent the (broken) wheel?”

The talking heads continue to keep themselves popular on the conference circuit by declaring “new” languages sexy. Where was Martin Fowler when I was playing around with Ruby in the late 90’s? Declaring Java the next sexy language.

Do I like Ruby / Groovy / ${insert cool language here} ?

Sure, they’re all cool.

I like Lua myself, and have had fun with Perl, Python, Smalltalk, Ada, and a score of other languages in the past. But how about we stop trying to fix the symptoms of our programmatic issues by creating new languages, and instead focus on the REAL problems behind them.

Why does web development suck compared to the days of desktop development? First and foremost is the transport, and not far behind is the statelessness. Statelessness has been solved somewhat, but why is it that web development has been going strong since the ’90s and we still have to be concerned with this paradigm of request-response? Why hasn’t this communication layer been abstracted away so that development can focus on the important aspects of the project?

JSF and other overly complicated frameworks may help with that issue but introduce plenty of problems of their own. We’re absolutely destroying the KISS principle every time we adopt the next flashy framework or language that promises easy setup and next-to-nothing maintenance – these always have a cost and rarely fix the problem.

How do we fix this? Not with a language or a framework, but a tool. A new browser. One that is actually INTENDED to serve applications and be more than a terminal (and no, Flex, OpenLaszlo, etc. won’t suffice – those are shoehorn patches). At least that would be a great start and the best first step. Then follow that up with some other improvements to ease data access, configuration, etc…

*** Rant Wind Down ***

Above all I’m beginning to tire of the declarations, musings, and “The Thinker” posings of the Fowlers, Eckels, and Cockburns of the world. Look closely (or not so closely) and it’s apparent they have their own agendas. They are paid extremely well to pontificate and then talk about their pontifications on the circuit. They are paid just as well the next year on said circuit when they refute what they had spewed just 12 months before…

Not that these guys don’t have valuable insight and a vast amount of experience, but they make mistakes – consistently. That’s why every other year they are touting the next greatest technology… A good example was the panel at last year’s NFJS downright berating Struts – a framework they were in love with only a year or two before.

So my advice is take what they preach, decide on your own if it makes sense, and then either disregard it or tuck it away for use later. Some of the ideas bandied about simply aren’t practical (pair-programming being one of them) while others are just goofy (writing a test for a class that doesn’t exist just to watch the test (amazingly) fail).

Let the TDD zealots and XP / Agile adepts begin posting!

*** End Rant ***

I apologize for this rant but I’m mired in writing documentation and I have reached my boiling point and have had my fill of Magic Cure All Languages (MCAL for short)…

Also, feel free to take my advice above and disregard any and / or all of my musings 🙂


Maven 2 Remote Repositories – Part II


It appears that archiva doesn’t work right out of the box – at least not in its current version. After downloading and building the project, it was still throwing configuration exceptions and wouldn’t deploy. So I searched around JIRA and found a fix for the bug. After following the prescribed steps and creating my own basic archiva.xml in my .m2 directory, it worked – at least the test did…

When I continued on to deploying the standalone version to my destination server, there was another issue – a NamingException. It turns out someone had checked in a plexus.xml config that duplicated a datasource. I just had to go to the conf/plexus.xml file and fix it… I crossed my fingers, closed my eyes, and ran the script…

It worked!

Now for configuration…

Follow the directions to set up your managed repositories and the repositories they proxy. Pretty straightforward, and it works out of the box. The tricky part is setting up your settings.xml.

It appears that at this time just setting up mirrors doesn’t work on its own. Mirroring works for any non-plugin repositories; however, for each plugin repository you will need to set up pluginRepository elements in a profile. This is clunky and will hopefully get worked out as the product matures.
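To make that concrete, here is a sketch of what the mirror-plus-profile setup looks like in settings.xml. The host, port, and repository path are placeholders for wherever your archiva instance actually lives:

```xml
<settings>
  <!-- The mirror covers the normal (non-plugin) repositories -->
  <mirrors>
    <mirror>
      <id>archiva</id>
      <mirrorOf>*</mirrorOf>
      <url>http://archiva.example.com:8080/archiva/repository/internal</url>
    </mirror>
  </mirrors>
  <!-- Plugin repositories must be declared explicitly in a profile -->
  <profiles>
    <profile>
      <id>archiva-plugins</id>
      <pluginRepositories>
        <pluginRepository>
          <id>archiva</id>
          <url>http://archiva.example.com:8080/archiva/repository/internal</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>archiva-plugins</activeProfile>
  </activeProfiles>
</settings>
```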

The last tidbit that took me a while to figure out is this: any connection to the managed archiva repository is expected to be secure – meaning it wants a user ID and password. This was not abundantly clear in the documentation… You need to set up a server entry in your settings.xml for each mirror / pluginRepository that you plan on proxying. The user ID and password are those defined in archiva. I simply defined a maven-user user with no password and assigned it the role of Repository Observer.
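For what it’s worth, the server entry looks something like this – the key point is that its id must match the id of the mirror / pluginRepository it secures, and the credentials are the ones defined in archiva (here, the no-password maven-user):

```xml
<servers>
  <server>
    <!-- must match the <id> of the mirror / pluginRepository -->
    <id>archiva</id>
    <username>maven-user</username>
    <!-- empty, per the maven-user setup described above -->
    <password></password>
  </server>
</servers>
```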

Once you have these set up you are good to go!


Maven 2 Remote Repositories


In Maven 1.x the repositories were simple – there wasn’t a difference between a local repository and a remote repository. The layouts were the same and there wasn’t additional information in one that wasn’t contained in the other. The only variant was where the repository was located.

In Maven 2.x that all changed. With the addition of transitive dependencies everything got a little more complicated. I will attempt to explain…

A remote repository – and a local one, for that matter – contains a few more files. The obligatory jars are still there, as are the deployed POMs. The additional files come in the way of metadata and their checksums.

Each artifact has at its root level (i.e. not per version) a maven-metadata.xml file (on the server) or multiple maven-${serverId}-metadata.xml files (on the local) that contain all the releases of the artifact, as well as the latest and released versions and the deployed timestamp (on the remote) or downloaded timestamp (on the local).
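As an illustration (the coordinates and versions here are made up), a server-side maven-metadata.xml looks roughly like this:

```xml
<metadata>
  <groupId>com.example</groupId>
  <artifactId>my-lib</artifactId>
  <versioning>
    <latest>1.2</latest>
    <release>1.2</release>
    <versions>
      <version>1.0</version>
      <version>1.1</version>
      <version>1.2</version>
    </versions>
    <!-- deployed (remote) or downloaded (local) timestamp -->
    <lastUpdated>20061103120000</lastUpdated>
  </versioning>
</metadata>
```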

These files are used for a couple of things. The first is to allow Maven to check for updates based on time: if you have repositories in your settings.xml or POM that allow updates (daily, for example), Maven will compare the local and remote timestamps to determine whether a download is required. The second comes into play when a dependency is declared without a version: Maven will first check the local repository and its metadata to determine the latest version of the artifact, and download it if necessary.
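That update check is driven by the updatePolicy on the repository definition – something along these lines (the repository id and URL are placeholders):

```xml
<repository>
  <id>internal</id>
  <url>http://archiva.example.com:8080/archiva/repository/internal</url>
  <releases>
    <!-- compare local vs. remote metadata timestamps once a day -->
    <updatePolicy>daily</updatePolicy>
  </releases>
  <snapshots>
    <updatePolicy>daily</updatePolicy>
  </snapshots>
</repository>
```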

This poses a small problem when trying to create an enterprise remote repository that doesn’t allow access to the internet at large. These metadata files need to be maintained by hand (or by an automated process) outside of the realm of Maven’s dependency management.

Why can’t you just copy a local repository to the remote? You can, but it won’t work for these dynamic version checks. The problem is that the local metadata files are renamed to include the server id from which a particular version was downloaded. There can be several, depending on the artifact, so you can’t just rename the file back to what Maven is expecting to find.

I’m checking into a couple of options. The first I’ve implemented as a stopgap – a basic wget script that can download an artifact’s complete directory structure. It works, but it’s clunky and doesn’t automatically handle transitive dependency downloads. The second tool I’m going to test-drive is Archiva.
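The wget stopgap boils down to translating the Maven coordinates into a repository path and recursively grabbing that directory. This is a simplified sketch, not the exact script – the repository URL and the coordinates at the bottom are just examples:

```shell
#!/bin/sh
# Base repository to pull from -- example value
REPO_URL="http://repo1.maven.org/maven2"

# Turn groupId artifactId version into the repo directory path,
# e.g. org.apache.maven maven-core 2.0.4 -> org/apache/maven/maven-core/2.0.4
artifact_path() {
  echo "$(echo "$1" | tr '.' '/')/$2/$3"
}

# Recursively fetch the whole version directory (jar, pom, checksums);
# -np keeps wget from climbing up to parent index pages
fetch_artifact() {
  wget -r -np -nH "$REPO_URL/$(artifact_path "$1" "$2" "$3")/"
}

artifact_path commons-logging commons-logging 1.1
```

This handles one artifact at a time, which is exactly why it falls down on transitive dependencies – each one has to be fetched by hand.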

Check back to see the results…
