Needed when the user didn't actually "install" but is just running it by
using the full path to "gitolite". Without this, every time my code
runs "gitolite <some sub-command>" I have to prefix "gitolite" with
$ENV{GL_BINDIR}, which is kinda painful...
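The fix can be sketched in one line; this is an illustrative version, assuming $ENV{GL_BINDIR} has already been set (e.g. from $0) before any sub-commands are run:

```perl
use strict;
use warnings;

# Hypothetical sketch: prepend the directory gitolite was invoked from
# to PATH, so later "gitolite <sub-command>" calls resolve even when
# the user never "installed" and is running by full path.
$ENV{PATH} = "$ENV{GL_BINDIR}:$ENV{PATH}" if $ENV{GL_BINDIR};
```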
Calling access() changes the CWD to $GL_REPO_BASE!
This causes a problem in the update script -- you're suddenly in the
wrong directory after calling access()!
This is actually happening inside load_1(), so fix that.
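The shape of the fix is the usual save-and-restore dance around the chdir; a minimal sketch (load_1_sketch and the comments are illustrative, not gitolite's actual code):

```perl
use strict;
use warnings;
use Cwd qw(getcwd);

# Hypothetical sketch of the fix: remember the CWD before chdir-ing to
# $GL_REPO_BASE, and restore it on the way out, so a caller like the
# update hook is not left in the wrong directory.
sub load_1_sketch {
    my $base = $ENV{GL_REPO_BASE} || '.';
    my $cwd  = getcwd();
    chdir $base or die "chdir $base: $!";
    # ... read the compiled conf for this repo here ...
    chdir $cwd or die "chdir $cwd: $!";
}
```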
(1) testing is very easy, just run this from a clone
t/g3-clean-install-setup-test
BUT BE WARNED THIS IS DESTRUCTIVE; details in t/WARNING
(2) install is equally simple; see 'INSTALL' in the main directory
make it easy to handle syntactic sugar. In summary, compile now calls
parse(sugar('gitolite.conf')).
Details:
- cleanup_conf_line went from sugar.pm to common.pm
- explode() and minions went from conf.pm to the new explode.pm
- the callback went away; everyone just passes whole arrays around now
- the new sugar() takes a filename and returns a listref
- all sugar scripts take and return a listref
- the first "built-in" sugar is written (setting gitweb.owner and
gitweb.description)
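The whole pipeline can be sketched like so; @sugar_scripts and the comment-stripping example here are illustrative stand-ins, not gitolite's actual code:

```perl
use strict;
use warnings;

# Hypothetical sketch of parse(sugar('gitolite.conf')): sugar() reads
# the conf file into a list, pushes the listref through each sugar
# script (each takes a listref and returns a listref), and hands the
# result to parse().
our @sugar_scripts = (
    # a trivial stand-in for a "built-in" sugar: strip comment lines
    sub { my $l = shift; return [ grep { !/^\s*#/ } @$l ]; },
);

sub sugar {
    my ($file) = @_;
    open my $fh, '<', $file or die "open $file: $!";
    chomp( my @lines = <$fh> );
    close $fh;
    my $listref = \@lines;
    $listref = $_->($listref) for @sugar_scripts;
    return $listref;
}
```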
the new RC file format (of being a hash called %rc) is getting a nice
workout :-)
The rc file used to be a bunch of variables, each of which had to be
declared before being used. While this was nice and all, it was a
little cumbersome to add a new flag or option.
If you disregard the "catch typos" aspect of having to predeclare
variables, it's a lot more useful to have all of rc be in a hash and use
any hash keys you want.
There could be other uses; for instance it could hold arbitrary data
that you would currently put in %ENV, without having to pollute %ENV if
you don't need child tasks to inherit it.
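For illustration, here's what that looks like; the key names below are made up, not gitolite's actual rc keys:

```perl
use strict;
use warnings;

# Hypothetical sketch of the %rc idea: the rc file is plain perl that
# fills a single hash, so adding a flag is just adding a key -- no
# predeclaration needed.
our %rc = (
    GL_ADMINDIR => "$ENV{HOME}/.gitolite",
    LOG_DEST    => 'syslog',          # made-up key, for illustration
    SITE_INFO   => 'example site',    # ditto
);

# elsewhere, code just checks for whatever keys it cares about,
# instead of stuffing such data into %ENV for children to inherit:
print "logging to $rc{LOG_DEST}\n" if $rc{LOG_DEST};
```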
----
NOTE: I also ran perltidy, which I don't always remember to do :)
this is pretty slow if you have thousands of repos, since it has to read
and parse a 'gl-conf' file for every repo. (For example, on a Lenovo
X201 thinkpad with 11170 repos and a cold cache, it took 288 seconds).
(With a hot cache -- like if you run the command again -- it took 2.1
seconds! So if you have a fast disk this may not be an issue for you
even if you have 10,000+ repos).