Maven 2 Remote Repositories – Part II


It appears that Archiva doesn’t work right out of the box – at least not in its current version. After downloading and building the project it was still throwing configuration exceptions and wouldn’t deploy. So I searched around JIRA and found a fix for the bug. After following the prescribed steps and creating my own basic archiva.xml in my .m2 directory it worked, at least the test did…

When I continued on to deploying the standalone version to my destination server there was another issue – a NamingException. It turns out someone had checked in a plexus.xml config that duplicated a datasource. I just had to go to the conf/plexus.xml file and fix it… I crossed my fingers, closed my eyes, and ran the run.sh script…

It worked!

Now for configuration…

Follow the directions to set up your managed repositories and the repositories that they proxy. That part is pretty straightforward and works out of the box. The tricky part is setting up your settings.xml.

It appears that, at this time, just setting up mirrors doesn’t work by itself. Mirroring works for any non-plugin repositories. However, for each plugin repository you will need to set up pluginRepository elements in a profile, as in the sketch below. This is clunky and will hopefully get worked out as the product matures.
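A minimal sketch of that settings.xml wiring might look like the following; the host name (archiva.example.com), repository URL, and profile id are placeholders for whatever your own Archiva instance exposes:

  <settings>
    <!-- route normal (non-plugin) repository requests through Archiva -->
    <mirrors>
      <mirror>
        <id>archiva.internal</id>
        <mirrorOf>*</mirrorOf>
        <url>http://archiva.example.com/repository/internal</url>
      </mirror>
    </mirrors>

    <!-- plugin repositories have to be declared explicitly in a profile -->
    <profiles>
      <profile>
        <id>archiva-plugins</id>
        <pluginRepositories>
          <pluginRepository>
            <id>archiva.internal</id>
            <url>http://archiva.example.com/repository/internal</url>
          </pluginRepository>
        </pluginRepositories>
      </profile>
    </profiles>

    <activeProfiles>
      <activeProfile>archiva-plugins</activeProfile>
    </activeProfiles>
  </settings>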

The last tidbit that took me a while to figure out is this: any connection to the managed Archiva repository is expected to be secure – meaning it wants a userid and password. This was not abundantly clear in the documentation… You need to set up a server entry in your settings.xml for each mirror / pluginRepository that you plan on proxying. The userid and password are those that are defined in Archiva. I simply defined a maven-user user with no password and assigned to it the role of Repository Observer.
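Continuing the sketch above, the matching servers section would look something like this; the maven-user account is just the one I described, created in Archiva with the Repository Observer role:

  <servers>
    <!-- the id must match the mirror / pluginRepository id it secures -->
    <server>
      <id>archiva.internal</id>
      <username>maven-user</username>
      <!-- empty, since the Archiva user was defined without a password -->
      <password></password>
    </server>
  </servers>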

Once you have these set up you are good to go!


Maven 2 Remote Repositories


In Maven 1.x the repositories were simple – there wasn’t a difference between a local repository and a remote repository. The layouts were the same and there wasn’t additional information in one that wasn’t contained in the other. The only difference was where the repository was located.

In Maven 2.x that all changed. With the addition of transitive dependencies everything got a little more complicated. I will attempt to explain…

A remote repository (and a local one, for that matter) contains a few more files. The obligatory jars are still there, as are the deployed POMs. The additional files come in the form of metadata and their checksums.

Each artifact has, at its root level (i.e. not per version), a maven-metadata.xml file (on the server) or multiple maven-${serverId}-metadata.xml files (on the local). These contain all the releases of the artifact, the latest and released versions, and either its deployment timestamp (on the remote) or its download timestamp (on the local).
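As a rough illustration (with made-up coordinates and versions), a server-side maven-metadata.xml looks something like this:

  <metadata>
    <groupId>com.example</groupId>
    <artifactId>example-lib</artifactId>
    <versioning>
      <latest>1.2</latest>
      <release>1.2</release>
      <versions>
        <version>1.0</version>
        <version>1.1</version>
        <version>1.2</version>
      </versions>
      <!-- deployment timestamp on the remote; download timestamp on the local -->
      <lastUpdated>20070412103000</lastUpdated>
    </versioning>
  </metadata>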

These files are used for a couple of things. The first is to allow Maven to check for updates based on time. If you have repositories in your settings.xml or POM that allow updates (daily, for example), Maven will check these timestamps and compare local versus remote to determine whether a download is required. The second comes into play when a dependency is declared without a version: Maven will first check the local repository and its metadata to determine what the latest version of the artifact is, and download it if necessary.
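For reference, the update policy is the piece of the repository declaration that drives those timestamp checks. A hypothetical entry (in a settings.xml profile or a POM, with a placeholder URL) might look like this:

  <repositories>
    <repository>
      <id>internal</id>
      <url>http://repo.example.com/maven2</url>
      <releases>
        <!-- check the remote metadata at most once a day -->
        <updatePolicy>daily</updatePolicy>
      </releases>
      <snapshots>
        <updatePolicy>daily</updatePolicy>
      </snapshots>
    </repository>
  </repositories>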

This poses a small problem when trying to create an enterprise remote repository that doesn’t allow access to the internet at large. These metadata files need to be maintained by hand (or by an automated process) outside the realm of Maven’s dependency management.

Why can’t you just copy a local repository to the remote? You can, but it won’t work for these dynamic version checks. The problem is that the local metadata files are named after the server id from which a particular version was downloaded. There can be several, depending on the artifact, so you can’t simply rename a file back to what Maven is expecting to find.

I’m checking into a couple of options. The first I’ve implemented as a stopgap: a basic wget script that downloads an artifact’s complete directory structure. It works, but it’s clunky and doesn’t automatically handle transitive dependency downloads. The second tool I’m going to test-drive is Archiva.

Check back to see the results…
