Are we abusing SourceForge SVN?

I just did an 'svn up' of my jnode tree, and something dawned on me while I was waiting for the classlib files to download.

They are big files: a total of 32Mb in their compressed form, and each time we update them we add roughly another 32Mb to the repository on SourceForge (SVN keeps every revision).

Now the SourceForge policies on SVN disk space usage are pretty relaxed, but what we are doing (checking large binaries into SVN) is not how SVN was designed to be used. In fact, it would probably be regarded as "bad practice" ... though I've seen much worse.

There are multiple aspects to this:

  1. At some point the SourceForge staff might notice that our disk usage is skyrocketing, and they might ask us to stop doing this.
  2. Some people (myself included) have internet connections with a cap on the amount of data we can download per month. Other people have slooowwww internet connections. In either case, regular downloads of 30+Mb could become an issue.
  3. And the flip side of the above is that the SourceForge staff might complain about the volume of SVN read traffic we are generating. (Unlike regular downloads, SVN traffic is not spread across mirrors. It is all hitting SourceForge's primary data center and network pipes.)

For the sake of argument, let us assume that this becomes a significant problem for one or more of the above reasons. Can we do anything about it?

I think so. I think the solution is to:

  1. Remove the big source and binary JARs from SVN and replace them with version-stamp files that get updated in the 'jnode' SVN tree each time a significant change is made to the 'classlib' tree.
  2. Modify the 'jnode/all/build.xml' file(s) to compare the version stamps in the jnode and classlib trees to determine whether the classlib tree needs to be refreshed (see the sketch below). The user would then be prompted to do an 'svn up' and build the classlib codebase. The 'jnode' build could even do this automatically, though that may be dangerous (and problematic given that we collectively use both SVN and GIT).
  3. Change the 'classlib' build to drop classlib.jar and classlib-src.jar files into the developer's 'jnode' sandbox, for use in 'jnode' builds and by the developer's IDE.
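
For concreteness, here is a very rough sketch of what that version-stamp check in 'jnode/all/build.xml' might look like. Treat it as an illustration only: the stamp file name ('classlib.version'), the property names and the assumption that the 'classlib' sandbox sits next to the 'jnode' sandbox are all placeholders, not decisions.

    <!-- Sketch only: stamp file name, property names and paths are hypothetical. -->
    <property name="classlib.dir" location="${basedir}/../classlib"/>

    <target name="check-classlib-version">
        <!-- True only if the stamp committed in the jnode tree matches the
             stamp in the developer's classlib sandbox. -->
        <condition property="classlib.in.sync">
            <filesmatch file1="${basedir}/classlib.version"
                        file2="${classlib.dir}/classlib.version"/>
        </condition>
        <fail unless="classlib.in.sync"
              message="classlib is out of date: do an 'svn up' and rebuild your classlib sandbox."/>
    </target>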

Of course, this means that developers need to have 'classlib' and 'jnode' sandboxes.

What do people think? Is it a problem we should care about? Is the above (the basis of) a workable solution?

I agree that putting those large binaries under SVN is not a good solution.
I might upload them somewhere else, though, and have the build process download them when needed.
IMHO downloading 32Mb when the classlib changes (which is not that often) shouldn't be a problem for a developer nowadays.
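
If we went that way, the fetch itself could be little more than an Ant <get> with timestamp checking, so an unchanged JAR is not downloaded twice. To be clear, the URL, destination directory and target name below are pure placeholders; nothing has actually been uploaded anywhere yet.

    <!-- Sketch only: the download URL and destination are placeholders. -->
    <target name="fetch-classlib">
        <mkdir dir="${basedir}/lib"/>
        <!-- usetimestamp skips the download when the local copy is already current -->
        <get src="http://downloads.example.org/jnode/classlib.jar"
             dest="${basedir}/lib/classlib.jar"
             usetimestamp="true"/>
        <get src="http://downloads.example.org/jnode/classlib-src.jar"
             dest="${basedir}/lib/classlib-src.jar"
             usetimestamp="true"/>
    </target>
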
What do others think? Share your opinion here.

As far as git is concerned, I believe this is a bit of a non-issue. Though as long as svn is in the picture, none of the git solutions to this problem can be used. As an FYI, the git way of doing this is via submodules.

To deal with the binary classlib archives, I like the idea of a timestamp and a download outside of svn. But don't make it automatic unless the end user sets some config variable saying it's OK to do so. If changes are made to the classlib they should be published, but whether the user downloads them should remain optional. This way a dev can refresh their classlib at their own choosing, even if the classlib gets updated multiple times.
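
In Ant terms, that opt-in could be as simple as gating the automatic refresh on a property the developer sets themselves. The property and target names below are just illustrative, not anything that exists today.

    <!-- Sketch only: a developer opts in by putting classlib.auto.update=true
         (any value will do; Ant's if= only tests whether the property is set)
         into a personal properties file loaded near the top of build.xml. -->
    <property file="${user.home}/.jnode-build.properties"/>

    <target name="auto-update-classlib" if="classlib.auto.update">
        <echo message="classlib.auto.update is set: refreshing classlib automatically."/>
        <!-- the actual download / 'svn up' / rebuild steps would go here -->
    </target>

    <target name="warn-classlib-stale" unless="classlib.auto.update">
        <echo message="classlib appears to be out of date; update it manually when ready."/>
    </target>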

The part of this solution that I see a problem with is the dev who wants to work on classlib itself, as I don't believe an svn client can deal with two svn checkouts in the same working directory. I have an idea for a solution to this; I just need a little time to test a theory.

Updating the classlib cannot be made optional in general. This is because the classlib6 code is closely related to the jnode code; if the two are not in sync, inconsistent code may result.