Current Projects – Part III: The Maven Dependency Source Scanner


Maven has a marvelous dependency management feature which allows an application to declare its top-level dependencies and then have Maven not only automatically bundle them with the application but also discover the transitive dependency graph of all of the declared dependencies. This is an extremely useful, time-saving process and allows for a bevy of management options for the project itself.
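To make that concrete, here's a minimal sketch of a pom.xml with a single declared top-level dependency. The application coordinates are placeholders, though commons-httpclient is a real artifact that pulls in commons-logging and commons-codec transitively:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0</version>

  <dependencies>
    <!-- Only the top-level dependency is declared; Maven discovers
         and downloads its transitive dependencies automatically -->
    <dependency>
      <groupId>commons-httpclient</groupId>
      <artifactId>commons-httpclient</artifactId>
      <version>3.0.1</version>
    </dependency>
  </dependencies>
</project>
```

Running mvn dependency:tree against a POM like this will print out the full resolved graph, which is a handy way to see just what Maven dragged in on your behalf.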


Current Projects – Part I: A Preface


What do I do?

Well, I’m a software developer / architect by trade. I’ve worked for various consulting companies over the years, serving dozens of clients using a half dozen languages on a bevy of platforms…

Now that we have that out of the way, my main focus over the last couple of years – outside of straightforward development – has been continuous integration, or more precisely the implementation of continuous integration in the enterprise.

Don’t know what that is? Check out the Wikipedia entry for a decent overview. If you are still so extremely interested after that thrilling description, please continue reading…

Nowadays I mostly work in the Java domain – so keep that in mind when I talk about the toolsets of CI. This is particularly important as the build tool of choice (at least for me and what I’ve implemented for the enterprise at Nationwide Insurance) is Maven – which you can check out at the Apache Maven site…

Maven deals with the “how” of compiling a project and producing an artifact. Unlike other build tools available, it is built from the ground up to handle the dependencies of a project (in this case other jar, sar, and assorted *ar files). Dependencies are the main headache in any build / deployment scenario, from simply finding out what they are and where they are located to determining which versions are needed (including their transitive dependencies).

It can be very nightmarish…

In any case, this series will not be a Maven tutorial. The projects I’ve created are for use (mainly) with Maven, and I will go over the pertinent details in a JIT manner (just-in-time, for you non-Java folk) so that you won’t be completely lost 🙂

I’m just getting back into blogging – so be gentle…

Next Up – The jar-indexer

Ruby Can Save The World!!!


(This is an excerpt from an email, but I felt that rant wasn’t in a public enough forum)…

*** Starting Rant ***

“Why do we continue to re-invent the (broken) wheel?”

The talking heads continue to keep themselves popular on the conference circuit by declaring “new” languages sexy. Where was Martin Fowler when I was playing around with Ruby in the late ’90s? Declaring Java the next sexy language.

Do I like Ruby / Groovy / ${insert cool language here}?

Sure, they’re all cool.

I like Lua myself, and have had fun with Perl, Python, Smalltalk, Ada, and a score of other languages in the past. But how about we stop trying to fix the symptoms of our programmatic issues by creating new languages and instead focus on the REAL problems behind them.

Why does web development suck compared to the days of desktop development? First and foremost is the transport, and not far behind is the statelessness. Statelessness has been solved somewhat, but why is it that web development has been going strong since the ’90s and we still have to be concerned with this paradigm of request-response? Why hasn’t this communication layer been abstracted away so that development can focus on the important aspects of the project?

JSF and other overly complicated frameworks may help with that issue but introduce plenty of problems of their own. We’re absolutely destroying the KISS principle every time we adopt the next flashy framework or language that promises easy setup and next-to-nothing maintenance; these always have a cost and rarely fix the problem.

How do we fix this? Not with a language or a framework, but a tool. A new browser. One that is actually INTENDED to serve applications and be more than a terminal (and no, Flex, OpenLaszlo, etc. won’t suffice; those are shoehorn patches). At least that would be a great first step. Then follow that up with some other improvements to ease data access, configuration, etc…

*** Rant Wind Down ***

Above all, I’m beginning to tire of the declarations, musings, and “The Thinker” posings of the Fowlers, Eckels, and Cockburns of the world. Look closely (or not so closely) and it’s apparent they have their own agendas. They are paid extremely well to pontificate and then talk about their pontifications on the circuit. They are paid just as well the next year on said circuit when they refute what they had spewed just 12 months before…

Not that these guys don’t have valuable insight and a vast amount of experience, but they make mistakes, consistently. That’s why every other year they are touting the next greatest technology… A good example was a panel at last year’s NFJS downright berating Struts, a framework they were in love with only a year or two before.

So my advice is take what they preach, decide on your own if it makes sense, and then either disregard it or tuck it away for use later. Some of the ideas bandied about simply aren’t practical (pair-programming being one of them) while others are just goofy (writing a test for a class that doesn’t exist just to watch the test (amazingly) fail).

Let the TDD zealots and XP / Agile adepts begin posting!

*** End Rant ***

I apologize for this rant, but I’m mired in writing documentation, I’ve reached my boiling point, and I’ve had my fill of Magic Cure-All Languages (MCALs for short)…

Also, feel free to take my advice above and disregard any and / or all of my musings 🙂


Maven 2 Remote Repositories – Part II


It appears that Archiva doesn’t work right out of the box, at least not in its current version. After downloading and building the project it was still throwing configuration exceptions and wouldn’t deploy. So I searched around JIRA and found a fix for the bug. After following the prescribed steps and creating my own basic archiva.xml in my .m2 directory it worked; at least the test did…

When I continued on to deploying the standalone version to my destination server there was another issue: a NamingException. It turns out someone had checked in a plexus.xml config that duplicated a datasource. I just had to go to the conf/plexus.xml file and fix it… I crossed my fingers, closed my eyes, and ran the run.sh script…

It worked!

Now for configuration…

Follow the directions to set up your managed repositories and the repositories that they proxy. Pretty straightforward, and it works out of the box. The tricky part is setting up your settings.xml.

It appears that at this time just setting up mirrors doesn’t work by itself. Mirroring works for any non-plugin repositories. However, for each plugin repository you will need to set up pluginRepository elements in a profile (see the sketch below). This is clunky and will hopefully get worked out as the product matures.
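Here’s a minimal sketch of the relevant pieces of settings.xml, assuming a managed repository named internal on a host called archiva.example.com (both made up for illustration):

```xml
<settings>
  <mirrors>
    <!-- Mirroring covers the regular (non-plugin) repositories -->
    <mirror>
      <id>archiva</id>
      <mirrorOf>central</mirrorOf>
      <url>http://archiva.example.com/repository/internal</url>
    </mirror>
  </mirrors>

  <profiles>
    <!-- Plugin repositories are not picked up by the mirror,
         so each one has to be declared explicitly in a profile -->
    <profile>
      <id>archiva-plugins</id>
      <pluginRepositories>
        <pluginRepository>
          <id>archiva</id>
          <url>http://archiva.example.com/repository/internal</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>

  <activeProfiles>
    <activeProfile>archiva-plugins</activeProfile>
  </activeProfiles>
</settings>
```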

The last tidbit that took me a while to figure out is this: any connection to the managed Archiva repository is expected to be secure, meaning it wants a userid and password. This was not abundantly clear in the documentation… You need to set up a server entry in your settings.xml for each mirror / pluginRepository that you plan on proxying. The userid and password are those defined in Archiva. I simply defined a maven-user user with no password and assigned it the role of Repository Observer.
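And the matching server entry, sketched out below. The id must line up with the mirror / pluginRepository id from the previous snippet, and maven-user is just the account I happened to create in Archiva:

```xml
<servers>
  <!-- One entry per mirror / pluginRepository id being proxied -->
  <server>
    <id>archiva</id>
    <username>maven-user</username>
    <!-- Empty password; the account only holds the Repository Observer role -->
    <password></password>
  </server>
</servers>
```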

Once you have these set up you are good to go!


Maven 2 Remote Repositories


In Maven 1.x the repositories were simple: there wasn’t a difference between a local repository and a remote repository. The layouts were the same and there wasn’t additional information in one that wasn’t contained in the other. The only difference was where the repository was located.

In Maven 2.x that all changed. With the addition of transitive dependencies everything got a little more complicated. I will attempt to explain…

A remote repository, and a local one for that matter, contains a few more files. The obligatory jars are still there, as are deployed POMs. The additional files come in the way of metadata and their checksums.

Each artifact has at its root level (i.e. not per version) a maven-metadata.xml file (on the server) or multiple maven-${serverId}-metadata.xml files (on the local) that contain all the releases of the artifact, as well as the latest and released versions, plus its deployed timestamp (on the remote) or its downloaded timestamp (on the local).
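For illustration, a minimal maven-metadata.xml might look something like this (the coordinates and versions are made up):

```xml
<metadata>
  <groupId>com.example</groupId>
  <artifactId>example-lib</artifactId>
  <versioning>
    <latest>1.2</latest>
    <release>1.2</release>
    <versions>
      <version>1.0</version>
      <version>1.1</version>
      <version>1.2</version>
    </versions>
    <!-- Deployed (remote) or downloaded (local) timestamp -->
    <lastUpdated>20070315120000</lastUpdated>
  </versioning>
</metadata>
```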

These files are used for a couple of things. The first is to allow Maven to check for updates based on time. If you have repositories in your settings.xml or POM that allow updates (daily, for example), Maven will check these timestamps and compare local versus remote to determine if a download is required. The second comes into play when a dependency is declared without a version. Maven will first check the local repository and its metadata to determine what the latest version of the artifact is, and download if necessary.
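That update checking is driven by the repository definition itself. Here’s a hedged sketch of one with a daily update policy, declared in a POM’s repositories section or in a settings.xml profile (the id and URL are placeholders):

```xml
<repository>
  <id>internal</id>
  <url>http://repo.example.com/maven2</url>
  <releases>
    <!-- Compare local metadata timestamps against the remote once a day -->
    <updatePolicy>daily</updatePolicy>
  </releases>
  <snapshots>
    <updatePolicy>daily</updatePolicy>
  </snapshots>
</repository>
```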

This poses a small problem when trying to create an enterprise remote repository that doesn’t allow access to the internet at large. These metadata files need to be maintained by hand (or by an automated process) outside the realm of Maven’s dependency management.

Why can’t you just copy a local repository to the remote? You can, but it won’t work for these dynamic version checks. The problem is that the local metadata files are renamed to include the server id from which a particular version was downloaded. There can be several, depending on the artifact, so you can’t just rename the file back to what Maven is expecting to find.

I’m checking into a couple of options. The first I’ve implemented as a stopgap: a basic wget script that can download an artifact’s complete directory structure. It works, but it’s clunky and doesn’t automatically handle transitive dependency downloads. The second tool I’m going to test drive is Archiva.

Check back to see the results…
