--- /dev/null
+.*\.do$
+.*\.plantuml$
+^bass\.html
+^clean$
--- /dev/null
+A [Index/Concepts]\r
+The build process *must* not depend on Internet access. You must be
+able to download in advance all the source code needed for the skel.
+As in most package build systems, that retrievable source code is
+called a "distfile".
+
+There are no requirements on how you get it. But in most cases it is
+either a downloadable tarball or an archive created from a specific
+VCS commit. A modern DVCS commit is already a self-integrity-checked
+tree of files. But a tarball is just a bunch of bytes. You have to
+provide the URL where to get it, and checksum(s) to verify it against,
+to be sure that it has not been tampered with or altered somehow.
+Moreover, many distribution sites also provide a detached
+cryptographic signature of the tarball, which you can verify against
+some known author's public key.
+
+That is why
+=> https://datatracker.ietf.org/doc/html/rfc5854 Metalink4\r
+files are used to get tarballs. They also tend to contain the checksum
+hashes published on the download website.
+
+A [Index/Programs] pack\r
+You can download all distfiles by invoking the "redo distfiles/all"
+target. An archive with all of them can be created with the
+"distfiles/pack >distfiles.tar" command.
+
+.meta4 files can be processed by any of three programs:
+
+* $META4_FETCHER=meta4ra-check
+ Use the "meta4ra-check -dl 0" command to download the first URL. This
+ is the default, because the meta4ra utilities are already installed
+ anyway. Unlike the other fetch options here, it won't try to download
+ the other URLs!
+
+* $META4_FETCHER=wget
+ Use Wget compiled with the --with-metalink option. The only drawback
+ is that most OS distributions ship Wget without that
+ (--input-metalink) option.
+
+* $META4_FETCHER=aria2c
+ => http://aria2.github.io/ Aria2\r
+ Unfortunately it sometimes fails to deal with links on GitHub.com.
--- /dev/null
+A [Index/Concepts]\r
+Generally, installation of a skelpkg is just unpacking of the bin
+archive into the skelbins directory and creation of symbolic links to
+files inside it. But there is also the ability to run "pre install"
+(preinst), "post install" (postinst), "pre remove" (prerm) and "post
+remove" (postrm) hooks.
+
+A hook is a directory with at least one executable file. All
+executable files in that directory are called in lexicographical
+order. Each hook is placed in the
+$NAME-$hsh/skelpkg/$NAME-$hsh/hooks/$hook directory.
+
+A hook is executed inside the directory where we perform the skelpkg
+installation, the one with the local/ subdirectory. It can expect the
+following environment variables:
+
+* $DST
+ Path to the directory where we perform the installation of the
+ skelpkg.
+* $PKG
+ Name of the skelpkg the user entered. As a rule it is a more-or-less
+ human-readable name without any hashes.
+* $NAMENHASH
+ $NAME-$hsh name of the package.
+* $BASS_ROOT, $BASS_RC, ...
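+ The execution model above can be sketched in plain shell: every
+ executable file in the hook directory runs in lexicographical order,
+ with the documented variables in its environment (the file names and
+ variable values below are purely illustrative):
+
```shell
# Two toy hook scripts; 010-* sorts (and therefore runs) before 020-*:
mkdir -p /tmp/hookdemo/preinst
printf '#!/bin/sh\necho first\n' >/tmp/hookdemo/preinst/010-rdeps
printf '#!/bin/sh\necho second\n' >/tmp/hookdemo/preinst/020-extra
chmod +x /tmp/hookdemo/preinst/*
# Run them the way a hook runner would, with the documented variables:
for f in /tmp/hookdemo/preinst/*; do
    [ -x "$f" ] && DST=/tmp/env PKG=demo NAMENHASH=demo-hsh "$f"
done
```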
+
+A [Index/Concepts] preinst-example\r
+One of the most frequent uses of the preinst hook is installation of
+runtime dependencies. For example, cURL depends on OpenSSL, so let's
+look at its hook:
+
+ $ tar xfO $SKELPKGS/$ARCH/curl-8.6.0 name | read namenhash
+
+ $ tar xfO $SKELPKGS/$ARCH/curl-8.6.0 bin |
+ tar tf - $namenhash/skelpkg/$namenhash/hooks/preinst
+ $namenhash/skelpkg/$namenhash/hooks/preinst/010-rdeps
+
+ $ tar xfO $SKELPKGS/$ARCH/curl-8.6.0 bin |
+ tar xfO - $namenhash/skelpkg/$namenhash/hooks/preinst/010-rdeps
+ #!/bin/sh -e
+ exec "$BASS_ROOT"/build/bin/pkg-inst openssl-1.1.1w
+
+A [Index/Concepts] postinst-example\r
+The postinst hook can be used to alter $DST's rc file, as the pkgconf skelpkg does:
+
+ $ tar xfO $SKELPKGS/$ARCH/pkgconf-2.1.1 name | read namenhash
+ $ tar xfO $SKELPKGS/$ARCH/pkgconf-2.1.1 bin |
+ tar xfO - $namenhash/skelpkg/$namenhash/hooks/postinst/01rc-add
+ #!/bin/sh -e
+ _localpath="$(realpath local)"
+ cat >>rc <<EOF
+ PKG_CONFIG_PATH="$_localpath/lib/pkgconfig:\$PKG_CONFIG_PATH"
+ PKG_CONFIG_PATH="$_localpath/libdata/pkgconfig:\$PKG_CONFIG_PATH"
+ export PKG_CONFIG_PATH
+ EOF
--- /dev/null
+A [Index/Concepts]\r
+The build system, besides basic commands like cc and (POSIX) make,
+requires at least:
+
+ A [Index/Programs] bsdtar\r
+* bsdtar
+ GNU tar, available by default on GNU OSes, brings many complications
+ into build-related scripts, because it is unable to decompress stdin
+ data on the fly. Many GNU/Linux distributions have the
+ libarchive-tools package, containing the libarchive-based bsdtar
+ utility, which deals with any compressed archive transparently.
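+ The difference is easy to see with a compressed tarball arriving on
+ stdin. A sketch (the bsdtar line assumes libarchive-tools is
+ installed; the explicit fallback works with plain GNU tar):
+
```shell
# Prepare a gzipped tarball to play with:
mkdir -p /tmp/tardemo/src
echo hello >/tmp/tardemo/src/file.txt
tar cf - -C /tmp/tardemo src | gzip >/tmp/tardemo/src.tar.gz
# GNU tar cannot sniff the compression of non-seekable stdin:
#   cat /tmp/tardemo/src.tar.gz | tar xf -     # fails on GNU tar
# bsdtar decompresses any supported format transparently:
#   cat /tmp/tardemo/src.tar.gz | bsdtar xf -
# Explicit decompression, the workaround GNU tar forces on you:
mkdir -p /tmp/tardemo/out
gzip -dc </tmp/tardemo/src.tar.gz | tar xf - -C /tmp/tardemo/out
cat /tmp/tardemo/out/src/file.txt
```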
+
+ A [Index/Programs] meta4ra\r
+=> http://www.meta4ra.stargrave.org/ meta4ra\r
+ Utilities for making and checking .meta4 files. They are just a
+ wrapper over XML and external hasher commands. They can also be used
+ for downloading.
+
+ A [Index/Programs] Perl\r
+=> https://www.perl.org/ Perl\r
+ Shell scripts are hard to write in a portable way. For example, there
+ is no way to know a file's size using POSIX-compatible utilities
+ alone, except by feeding it through "wc -c". There is no way to get a
+ file's mtime, as "stat"'s options are completely different on BSD and
+ GNU systems. Only a small subset of features is common among the sed,
+ awk and grep utilities. There is no reliable portable way of using
+ "sed -i", for example.
+
+ That is why the only sane option in most cases is Perl. Its
+ interpreter is minimalistic enough and tends to be included even in
+ OpenWRT distributions. It behaves the same way on all widespread OSes
+ (no zoo of loosely compatible dialects).
+
+ A [Index/Programs] redo\r
+=> http://cr.yp.to/redo.html redo build system\r
+ redo came into use in the project due to its built-in ability to use
+ locks to prevent concurrent building of the same target. skelbins
+ have to be installed to permanent paths, so concurrent builds would
+ ruin them. redo performs atomic, reliable writes of the target's
+ result. And by definition it has dependency tracking of targets. All
+ of that greatly reduces the skel's code size. And as an unexpected
+ bonus you get the ability to parallelise your builds.
+
+ A [Index/Programs] goredo\r
+ => http://www.goredo.cypherpunks.su/ goredo\r
+ is the recommended implementation, being very fast and having the
+ largest integration test suite.
+
+ A [Index/Programs] setlock\r
+ A [Index/Programs] lockf\r
+ A [Index/Programs] flock\r
+* Any of setlock from daemontools, or lockf from BSD, or flock from GNU.
+
+ A [Index/Programs] Go\r
+=> https://go.dev/ Go\r
+ At least for building the supplementary utilities. And, obviously, if
+ you build Go-related software. Actually the Go-written utilities can
+ be replaced, and then no Go dependency will be required at all.
+
+ A [Index/Programs] fetch\r
+ A [Index/Programs] Wget\r
+ A [Index/Programs] cURL\r
+* FreeBSD's fetch, or
+ => https://www.gnu.org/software/wget/ GNU Wget\r
+ => https://curl.se/ cURL\r
+ Although meta4ra can be used instead of all of them.
--- /dev/null
+One of the most trivial skels, for a hello-world program, can be made
+with the following skel in skel/hw.do:
+
+ [ -n "$BASS_ROOT" ] || BASS_ROOT="$(dirname "$(realpath -- "$0")")"/../../..
+ sname=$1.do . "$BASS_ROOT"/lib/rc
+ . "$BASS_ROOT"/build/skel/common.rc
+
+ mkdir -p "$SKELBINS"/$ARCH/$NAME/bin
+ cd "$SKELBINS"/$ARCH
+ cp ~/src/misc/hw/hw.pl $NAME/bin
+ "$BASS_ROOT"/build/lib/mk-pkg $NAME
+
+But let's write a skel and build a skelpkg for the convenient
+=> https://www.gnu.org/software/parallel/ GNU parallel\r
+
+ A [Index/Variables] BASS_RC\r
+* Go to the build/ subdirectory and create a configuration file. I
+ tend to call it rc. Set the $BASS_RC environment variable to its path:
+
+ $ cd build/
+ $ cat >rc <<EOF
+ MAKE_JOBS=8
+ SKELBINS=/tmp/skelbins
+ EOF
+ $ export BASS_RC=`realpath rc`
+
+ A [Index/Variables] ARCH\r
+ A [Index/Variables] SKELBINS\r
+ A [Index/Variables] BASS_REV\r
+ You can look in lib/rc for the variables you can set. One of the most
+ important is $ARCH, which sets what architecture you are building
+ for. If you have got a non-Git-capable checkout, then you should
+ probably also set $BASS_REV to some dummy string. When it changes,
+ your skelbins' hashes change too. It is crucial that the $SKELBINS
+ path be the same as it will be on the slaves.
+
+* Prepare the distfile download rule. The distfiles/ directory
+ contains a default.do target, which is executed by default for every
+ target in it (unless that target has its own .do file). By default
+ that target downloads the file based on the corresponding .meta4 file
+ nearby.
+
+ According to parallel's homepage, it is advised to use the GNU
+ mirrors for downloading, so let's take its latest release together
+ with its signature:
+
+ $ wget https://ftpmirror.gnu.org/parallel/parallel-20240122.tar.bz2
+ $ wget https://ftpmirror.gnu.org/parallel/parallel-20240122.tar.bz2.sig
+
+ Its .sig file contains non-signature-related commentary that we
+ should strip off:
+
+ $ perl -i -ne 'print if /BEGIN/../END/' parallel-20240122.tar.bz2.sig
+
+ Much software provides signatures in binary format, which can be
+ easily converted with "gpg --enarmor <....sig >....asc".
+
+ A [Index/Programs] meta4ra-create\r
+ Then we must create the corresponding Metalink4 file, which includes
+ the signature, URL(s) and checksums. I will use the meta4ra-create
+ utility for that purpose:
+
+ $ meta4ra-create \
+ -fn parallel-20240122.tar.bz2 \
+ -sig-pgp parallel-20240122.tar.bz2.sig \
+ https://ftpmirror.gnu.org/parallel/parallel-20240122.tar.bz2 \
+ <parallel-20240122.tar.bz2 >parallel-20240122.tar.bz2.meta4
+
+ A [Index/Concepts] skel-example\r
+* Write the skel file skel/sysutils/parallel-20240122.do itself:
+
+ [ -n "$BASS_ROOT" ] || BASS_ROOT="$(dirname "$(realpath -- "$0")")"/../../../..
+ sname=$1.do . "$BASS_ROOT"/lib/rc
+ . "$BASS_ROOT"/build/skel/common.rc
+
+ bdeps="rc-paths stow archivers/zstd devel/gmake-4.4.1"
+ rdeps="lang/perl-5.32.1"
+ redo-ifchange $bdeps "$DISTFILES"/$name.tar.bz2 $rdeps
+ hsh=$("$BASS_ROOT"/build/bin/cksum $BASS_REV $spath)
+ . "$BASS_ROOT"/build/lib/create-tmp-for-build.rc
+ "$BASS_ROOT"/build/bin/pkg-inst $bdeps $rdeps
+ . ./rc
+ $TAR xf "$DISTFILES"/$name.tar.bz2
+ "$BASS_ROOT"/bin/rm-r "$SKELBINS"/$ARCH/$NAME-$hsh
+
+ cd $NAME
+ ./configure --prefix="$SKELBINS"/$ARCH/$NAME-$hsh --disable-documentation >&2
+ perl -i -ne 'print unless /^\s+citation_notice..;$/' src/parallel
+ gmake -j$MAKE_JOBS >&2
+ gmake install >&2
+
+ cd "$SKELBINS"/$ARCH
+ "$LIB"/prepare-preinst-010-rdeps $NAME-$hsh $rdeps
+ mkdir -p $NAME-$hsh/skelpkg/$NAME-$hsh/hooks/postinst
+ cat >$NAME-$hsh/skelpkg/$NAME-$hsh/hooks/postinst/01will-cite <<EOF
+ #!/bin/sh
+ echo yeah, yeah, will cite >&2
+ EOF
+ chmod +x $NAME-$hsh/skelpkg/$NAME-$hsh/hooks/postinst/01will-cite
+ "$BASS_ROOT"/build/lib/mk-pkg $NAME-$hsh
+
+ A [Index/Programs] mk-arch\r
+* Create a link to it in skelpkgs's directory for the given
+ architecture. You can use pkg/mk-arch to conveniently create $ARCH
+ directory and link all missing skels to it:
+
+ $ pkg/mk-arch
+
+* Run the skelpkg creation job itself:
+
+ $ redo pkg/FreeBSD-amd64-13.2-RELEASE/sysutils/parallel-20240122
+
+* Check and confirm that created file looks like a skelpkg:
+
+ % tar xfO pkg/FreeBSD-amd64-13.2-RELEASE/sysutils/parallel-20240122 bin | tar tf -
+ parallel-20240122-xhVYojyMWD8XeHTuTe44q1NyHI2b_l5fKsopunYFzkc/
+ parallel-20240122-xhVYojyMWD8XeHTuTe44q1NyHI2b_l5fKsopunYFzkc/bin/
+ parallel-20240122-xhVYojyMWD8XeHTuTe44q1NyHI2b_l5fKsopunYFzkc/bin/env_parallel
+ parallel-20240122-xhVYojyMWD8XeHTuTe44q1NyHI2b_l5fKsopunYFzkc/bin/[...]
+ parallel-20240122-xhVYojyMWD8XeHTuTe44q1NyHI2b_l5fKsopunYFzkc/bin/parallel
+
+Let's describe what is happening in the skel:
+
+* As a .do file is not executable and has no shebang, most popular
+ redo implementations will start it with the "/bin/sh -e" command. So
+ it is a POSIX shell script. But you are free to use any interpreted
+ language, or even to build and compile the .do file itself with
+ another .do.
+
+* Nearly all BASS scripts and programs require you to set $BASS_ROOT
+ (the path to the root directory of the BASS project (build/, master/,
+ slave/)) and $BASS_RC, which you already set before. $BASS_ROOT is
+ generally set by the invoking script itself, based on its own path in
+ the BASS hierarchy.
+
+ The line setting $BASS_ROOT can just be copy-pasted among all skels.
+
+ A [Index/Programs] common.rc\r
+* common.rc checks that we are running under the pkg/ directory, not
+ the skel/ one. If an already prebuilt target result exists in
+ pkg/.../prebuilt/$PKG, then it is hardlinked as the result. Be
+ *aware* that it also changes the current working directory to
+ $SKELPKGS, so it can depend on subdir/pkg packages.
+
+ A [Index/Variables] sname\r
+ A [Index/Programs] lib-rc\r
+* Nearly all BASS scripts and programs also assume that you will
+ source the $BASS_ROOT/lib/rc file, which sets various common
+ variables. It expects you to pass the $sname variable with the
+ current script's name.
+
+ It checks that $BASS_RC is specified, and sets:
+
+ A [Index/Variables] NAME\r
+ * $NAME
+ Base name of the script/skel, without .do extension.
+
+ A [Index/Variables] SPATH\r
+ * $SPATH
+ Full path to the invoking script itself.
+
+ * $ARCH
+ Current machine's architecture, what it is building for.
+
+ * $SKELBINS
+ Path to directory with unpacked skelbins.
+
+ A [Index/Variables] SKELPKGS\r
+ * $SKELPKGS
+ Path to directory containing built $ARCH/$SKELPKG skelpkgs.
+
+ A [Index/Variables] MAKE_JOBS\r
+ * $MAKE_JOBS
+ Number of Make's parallel jobs. Can be passed to "make".
+
+ A [Index/Variables] DISTFILES\r
+ * $DISTFILES
+ Path to $BASS_ROOT/build/distfiles directory.
+
+ * $BASS_REV
+ Current BASS'es source code revision.
+
+ A [Index/Variables] SETLOCK\r
+ A [Index/Variables] META4RA_HASHES\r
+ A [Index/Variables] FSYNC\r
+ A [Index/Variables] TAR\r
+ A [Index/Variables] TMPDIR\r
+ * $SETLOCK, $META4RA_HASHES, $FSYNC, $TAR, $TMPDIR
+
+ And of course, in most cases they can be overridden in your $BASS_RC.
+
+* $bdeps and $rdeps are just convenience variables, to avoid repeating
+ the lists multiple times in the script. parallel requires Perl during
+ build and at runtime, so I called it a "runtime dependency". Actually
+ it builds perfectly well with POSIX/BSD make, but as an exercise we
+ assume that it builds only with GNU make, so we also record that as a
+ "build dependency".
+
+ Nearly every skel requires rc-paths (see below), stow and zstd skelpkgs.
+
+ A [Index/Programs] skel-stow\r
+ The stow skelpkg is very special: it can be built without invoking
+ GNU Make and the Perl that Stow depends on. It can also be installed
+ by pkg-inst even if no Stow is installed yet: it stows itself in its
+ postinst hook. Also, it installs the perl skelpkg only if it exists,
+ so it works on a clean build system.
+
+ A [Index/Programs] skel-zstd\r
+ The zstd skelpkg makes the zstd* compressor available when the mk-pkg
+ command is invoked to create the resulting skelpkg. Only a few
+ skelpkgs use the gzip compressor instead.
+
+* Then we call redo-ifchange to ensure that our distfile exists
+ (otherwise it will be downloaded), and that the dependency packages
+ exist too. Remember that redo guarantees to run a script in the
+ directory where it lives? If any of the dependencies does not exist,
+ then redo will invoke its build the same way we invoked the build of
+ the parallel skelpkg. It also ensures that the rc-paths, stow and
+ zstd skelpkgs exist.
+
+ A [Index/Programs] cksum\r
+* Next we compute the current skelpkg's hash. Currently it is used
+ solely to produce a different hash if either the BASS commit or the
+ skel itself changes. The $BASS_ROOT/build/bin/cksum utility takes any
+ number of arguments, each of which is either a string or a path to a
+ file. cksum hashes all of that information.
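+ A toy model of what such a checksummer boils down to (a hypothetical
+ sketch for illustration, not the real BASS cksum code): hash every
+ argument, file contents or literal string, into one URL-safe Base64
+ digest:
+
```shell
# Hash files' contents and literal strings into a single URL-safe
# Base64 digest (hypothetical sketch, not the real BASS cksum):
cksum_sketch() {
    for arg in "$@"; do
        if [ -f "$arg" ]; then cat "$arg"; else printf %s "$arg"; fi
    done | openssl dgst -sha256 -binary | base64 | tr '+/' '-_' | tr -d '=\n'
}
```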
+
+ A [Index/Programs] create-tmp-for-build\r
+* Then we source the $BASS_ROOT/build/lib/create-tmp-for-build.rc
+ helper. It:
+ * creates and changes to a temporary directory ($tmp)
+ * sets a trap to remove it on errors and exit
+ * creates the local/ subdirectory
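+ Its effect can be sketched as follows (names taken from the
+ description above; the real helper may differ in details):
+
```shell
# Create a scratch build directory with a local/ subdir, and make
# sure it disappears on error or exit:
tmp=$(mktemp -d)
trap 'rm -fr "$tmp"' HUP PIPE INT QUIT TERM EXIT
cd "$tmp"
mkdir local
```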
+
+ A [Index/Programs] pkg-inst\r
+* Then it installs the build and runtime dependencies with the
+ $BASS_ROOT/build/bin/pkg-inst command.
+
+ Pay attention that it installs the stow skelpkg first, which
+ virtually every other skelpkg requires to work properly. So the order
+ of the skels is very important there.
+
+ A [Index/Programs] rc-paths\r
+* Then it sources the ./rc file there. Where did that file come from?
+ Each skelpkg can contain postinst hooks. The rc-paths skelpkg is used
+ solely for the side effect of its postinst hook, which creates that
+ rc file with altered
+ $PATH,
+ $MANPATH,
+ $INFOPATH,
+ $LD_LIBRARY_PATH,
+ $CFLAGS,
+ $CXXFLAGS,
+ $LDFLAGS
+ variables, making them aware of the local/ hierarchy.
+
+ When we install the pkgconf skelpkg, its postinst hook also appends
+ an altering of the $PKG_CONFIG_PATH variable.
+
+ Now we are aware of the various installed packages and their specific
+ environment variables.
+
+* Unpack the distfile with $TAR into the current temporary directory.
+
+ A [Index/Programs] rm-r\r
+* Remove the existing skelbin if there is one. In theory, each time
+ you make any modification to your skels, you make a commit in the
+ BASS repository, thus changing $BASS_REV and the corresponding $hsh
+ value. So each new skelbin build should land in a different
+ directory. But while you are developing your skel, no commits are
+ made and no hashes change. Moreover, your previous build attempt may
+ have failed due to an I/O or system error.
+
+ Because of redo's lockfiles it should be safe to remove the existing
+ skelbin, because nobody can be using it.
+
+ Why not a trivial "rm -fr"? Because skelbins are forced to be
+ read-only directories, so you just won't have enough permissions to
+ remove them. $BASS_ROOT/bin/rm-r deals with that.
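+ What such a remover has to do can be sketched like this (an
+ assumption based on the description above, not the real rm-r source):
+ restore the owner's write permission before removing:
+
```shell
# Remove a read-only tree: make it writable again first, then remove
# recursively (hypothetical sketch of rm-r's job):
rm_r_sketch() {
    [ -e "$1" ] || return 0
    chmod -R u+w "$1"
    rm -fr "$1"
}
```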
+
+* Go to the unpacked directory and ./configure the program at last!
+ Pay attention to the proper installation into the immutable/permanent
+ path under $SKELBINS/$ARCH/$NAME-$hsh.
+
+ Remember that any output to stdout is saved by redo as the result of
+ the target! So do not forget to redirect messages to stderr or
+ silence them altogether.
+
+* GNU Parallel has a possibly annoying and disturbing advertisement
+ about citing it. Let's patch it and remove the annoying code. You can
+ do whatever you want with the code: download a patch as a distfile
+ and apply it here, or keep some source code nearby in the skels
+ directory. No limitations.
+
+* Call gmake to build it. Because the gmake skelpkg is installed, that
+ command is available under that name even on GNU systems. Use
+ $MAKE_JOBS if it is appropriate and safe. Be aware that many programs
+ can not be built in parallel mode.
+
+* As parallel requires Perl at runtime, we need to ensure that it is
+ installed whenever our skelpkg is installed. Create a preinst
+ [Build/Hooks] hook for that purpose, which calls the pkg-inst
+ command.
+
+ A [Index/Programs] prepare-preinst-010-rdeps\r
+ Because runtime-dependency hooks are needed so often, there is
+ prepare-preinst-010-rdeps for that. Its arguments are converted into
+ the corresponding skelpkg/$namenhash/hooks/preinst/010-rdeps
+ executable file.
+
+* Just for practice, let's also create a postinst hook that prints our
+ promise to cite GNU Parallel. Just an ordinary 01will-cite script.
+ After installation of the parallel-20240122 skelpkg, you will see
+ that promise message.
+
+ A [Index/Programs] mk-pkg\r
+* And at last we are ready to create the final skelpkg from our
+ existing skelbin directory. $BASS_ROOT/build/lib/mk-pkg takes the
+ name of the directory you need to pack into a skelpkg. It
+ automatically includes the necessary "name" and "buildinfo" files
+ with their corresponding .meta4 files.
+
+ mk-pkg outputs the skelpkg to stdout, which is not explicitly
+ captured in that redo target, so it passes through to redo itself,
+ becoming the resulting skelpkg.
-@node Build
-@cindex build system
-@unnumbered Build
-
+A [Index/Concepts]\r
The build system is a computer that compiles various software and packs
it in a way that is easily installable on a remote system. The build
system's architecture/OS is the same as the slave's. There is no
cross-compilation ability. If you want to support multiple OSes, then
use multiple build machines.
-@include build/skel.texi
-@include build/skelbin.texi
-@include build/skelpkg.texi
-@include build/hooks.texi
-@include build/skelenv.texi
-@include build/requirements.texi
-@include build/distfiles.texi
-@include build/tutorial.texi
+[Build/skel]
+[Build/skelbin]
+[Build/skelpkg]
+[Build/Hooks]
+[Build/skelenv]
+[Build/Requirements]
+[Build/Distfiles]
+[Build/Tutorial]
-@node skel
-@cindex skel
-@section skel
-
+A [Index/Concepts]\r
Software build rules are placed in a so-called "skel" (short for
"skeleton"). By default, each skel is just a shell script, telling how
to get the source code, and how to configure, compile and install it.
In too many cases the installation path can not be some temporary
directory you create during the build, because much software tends to
hardcode various configured installation paths and just won't work from
any other place. That is why,
-like Nix does, skel is installed to some @strong{permanent} path. If you
-have got enough RAM, it could be just a directory in @command{tmpfs}.
+like Nix does, a skel is installed to some *permanent* path. If you
+have enough RAM, it can be just a directory in tmpfs.
Actually skels are not shell-scripts, but
-@url{http://cr.yp.to/redo.html, redo} targets. So they could be written
-on any other language, even be compiled programs. @command{redo} is used
-because of its built-in locking capabilities and ability to run jobs in
-parallel.
+=> http://cr.yp.to/redo.html redo\r
+targets. So they can be written in any other language, or even be
+compiled programs. redo is used because of its built-in locking
+capabilities and its ability to run jobs in parallel.
--- /dev/null
+A [Index/Concepts]\r
+A built skel is called a "skelbin". Each skelbin is placed in its own
+directory, containing nothing more than that installed skelbin. In most
+cases that is done trivially by specifying --prefix=$SKELBINS/$NAME-$hsh
+in the skel. $NAME is the skel's name, the package's name, something
+like perl-5.32.1.
+
+A [Index/Concepts] namenhash\r
+$hsh is a supplementary hash value used to distinguish different
+builds/revisions of the same package. Currently it is just a hash of
+the skel itself and BASS'es current commit revision, encoded as a
+URL-safe Base64 string. So, for example, if $SKELBINS is the
+/tmp/skelbins directory, then that Perl skelbin is installed to:
+/tmp/skelbins/perl-5.32.1-zP3IpCa_XY7pGHCNYQxp_1KjQQNCyUl84LqSrWLErjA.
+$NAME-$hsh is often called "namenhash" in the code.
+
+A [Index/Concepts] GNU-Stow\r
+But that is just a single piece of software's skelbin directory. What
+if another skel requires multiple other skelbins, depends on them?
+=> https://www.gnu.org/software/stow/ GNU Stow\r
+helps there. It is a simple symbolic link manager. Assume you have got
+the /tmp/skelbins/perl5-$hsh0 and /tmp/skelbins/gmake-4.4-$hsh1
+skelbins and your current working directory is /tmp/tmp.whatever. stow
+is used to create symlinks from the dependent skelbins into our
+current local/ subdirectory this way:
+
+ /tmp/tmp.whatever/local/bin/gmake -> /tmp/skelbins/gmake-4.4-$hsh1/bin/gmake
+ /tmp/tmp.whatever/local/bin/perl5 -> /tmp/skelbins/perl5-$hsh0/bin/perl5
+ /tmp/tmp.whatever/local/lib/site_perl -> /tmp/skelbins/perl5-$hsh0/lib/site_perl
+ /tmp/tmp.whatever/local/share/info -> /tmp/skelbins/gmake-4.4-$hsh1/share/info
+ [...]
+
+If you add $tmp/local/bin to your $PATH and $tmp/local/lib to
+$LD_LIBRARY_PATH, then both gmake and perl will be available to that
+local build and will work perfectly. Alter $CFLAGS, $LDFLAGS and
+$PKG_CONFIG_PATH, and in most cases the whole build environment will
+be aware of those skelbins.
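+What stow produces can be sketched with plain symlinks (GNU Stow
+automates exactly this kind of linking; the paths below mirror the
+listing above, with literal hsh1 as a stand-in for the real hash):
+
```shell
# Simulate one of the links stow would create for a dependency:
mkdir -p /tmp/skelbins/gmake-4.4-hsh1/bin
touch /tmp/skelbins/gmake-4.4-hsh1/bin/gmake
mkdir -p /tmp/tmp.whatever/local/bin
ln -sf /tmp/skelbins/gmake-4.4-hsh1/bin/gmake \
    /tmp/tmp.whatever/local/bin/gmake
```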
--- /dev/null
+A [Index/Concepts]\r
+skelpkgs are installed into a so-called "skelenv" (skel-environment).
+A skelenv is a directory with at least a local/ directory.
+
+A [Index/Programs] pkg-inst\r
+With the "pkg-inst $PKG" command you can "install" skelpkgs into that
+skelenv. The installation procedure checks whether the skelbin (the
+unpacked skelpkg) already exists in $SKELBINS, and unpacks the skelpkg
+if it does not. Then it runs the preinst hook, stow, and the postinst
+hook.
+
+For pkg-inst's use, a skelpkgs/$PKG subdirectory is created in the
+skelenv, with:
+
+* lock -- used by the pkg-inst and pkg-rm commands themselves.
+* namenhash -- $NAME-$hsh
+* preinst.done, postinst.done, prerm.done, postrm.done -- to track
+ whether the corresponding hooks were successfully executed (if they
+ exist)
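+The .done bookkeeping can be modelled like this (a toy model, not the
+real pkg-inst code): run a hook's executables in order, once, and
+record completion in a marker file so a re-run becomes a no-op:
+
```shell
# Run every executable in a hook directory once, lexicographically,
# recording completion in a .done marker file:
run_hook() { # $1: hooks/$hook directory, $2: $hook.done marker
    [ -e "$2" ] && return 0
    if [ -d "$1" ]; then
        for f in "$1"/*; do
            if [ -x "$f" ]; then "$f"; fi
        done
    fi
    touch "$2"
}
```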
+
+Only one version of a skelpkg can be installed. You can not use two
+skelpkgs with differing hashes inside them.
+
+Some skelpkgs tend to create and alter an "rc" file in the root of the
+skelenv. It is meant to be sourced by your shell, to modify various
+environment variables for ease of skelenv usage.
+
+A [Index/Programs] pkg-rm\r
+The "pkg-rm $PKG" command can be used to remove skelpkgs from your
+skelenv. Basically it is just un-stow-ing them and removing the
+skelpkgs/$PKG subdirectory, with the *rm hooks taken into account.
+
+A [Index/Programs] mk-skelenv\r
+mk-skelenv can be used to create a skelenv in the current directory.
+It creates the local/ directory and installs the rc-paths and stow
+packages.
--- /dev/null
+A [Index/Concepts]\r
+skelbins are not appropriate for distribution as is, as directories
+with a bunch of files. That could be fragile due to network filesystem
+limitations. It is slow, because some skelbins already contain tens of
+thousands of files. Additional metadata has to be supplied with the
+skelbin. Your build steps are not aware of the exact $hsh values of a
+package, and it would be insane to hardcode them and repeatedly update
+them after each BASS/skel change. And a skelbin can depend on another
+skelbin to work (a runtime dependency).
+
+A [Index/Concepts] format\r
+That is why we have to use some kind of distribution format to solve
+the issues above. A "skelpkg" is a packed skelbin with additional
+metadata. Similarly to Arch Linux and
+=> https://www.gentoo.org/glep/glep-0078.html Gentoo\r
+a skelpkg is a single file, an uncompressed POSIX pax archive with the
+following entries:
+
+* name, name.meta4
+ Full name of the skelbin directory, $NAME-$hsh.
+ With an optional checksum file.
+* buildinfo, buildinfo.meta4
+ Just textual information about how that skelbin/skelpkg was built.
+ Currently just the current BASS commit revision.
+* bin.meta4, bin
+ Compressed POSIX pax archive containing the skelbin
+ ($NAME-$hsh/ directory hierarchy).
+
+A [Index/Concepts] pax-archive\r
+A [Index/Concepts] ustar-archive\r
+A [Index/Programs] detpax\r
+The POSIX ustar archive format can not hold more than 8 GiB of data,
+nor (very) long filenames. Forced pax usage guarantees compatibility
+with a variety of OSes. GNU tar's format (which also avoids the
+limitations above) can easily be unreadable on non-GNU systems. BASS
+uses the build/contrib/detpax archiver to create pax archives in a
+deterministic, bit-to-bit reproducible way.
+
+As pax/tar does not have any kind of index, as ZIP does, it is crucial
+to place the largest file, "bin", at the very end of the archive. And
+that is why the outer archive is not compressed -- to easily seek
+among its entries.
+
+A [Index/Programs] Metalink4\r
+A [Index/Programs] meta4\r
+=> https://datatracker.ietf.org/doc/html/rfc5854 Metalink4\r
+XML-based format is used to keep integrity checksums for the files. It
+is a format well supported by various tools, capable of storing
+multiple checksums simultaneously. That allows us to keep both
+Streebog hashes and much faster ones.
+
+Nothing prevents you from extending it with additional files, for
+example holding cryptographic signatures.
+
+A skelpkg's name is whatever you want. As a rule it should be just the
+skel's $NAME. But what if you do not care about the exact skel version
+and just want to install whatever perl (for example)? You can always
+create a (sym)link to it with a short name.
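+For example (directory and file names are illustrative):
+
```shell
# A versioned skelpkg plus a stable short-name alias to it:
mkdir -p /tmp/skelpkgs
touch /tmp/skelpkgs/perl-5.32.1
ln -sf perl-5.32.1 /tmp/skelpkgs/perl
```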
+
+A [Index/Programs] Zstandard\r
+The "bin" inner archive is compressed by default with
+=> https://facebook.github.io/zstd/ Zstandard\r
+Being much faster than the venerable gzip, it achieves a much better
+compression ratio. But its main advantage is its extreme decompression
+speed, where your CPU will hardly be the bottleneck. Reducing the
+amount of data transferred between the disks/network and your system
+results in a considerable decrease in transfer/installation time. That
+is why so many package managers and distributions have already moved
+to using it by default.
+
+A [Index/Variables] COMPRESSOR\r
+But you can override that and use any kind of compressor in the
+skelpkg (with $COMPRESSOR when using build/lib/mk-pkg). That is
+required, for example, for the zstd skelpkg itself, which could not be
+decompressed without zstd already installed.
--- /dev/null
+A [Index/Concepts]\r
+Most daemons and services are designed to be run under some supervisor
+program. They will be automatically restarted in case of failure.
+There will be a reliable signalling ability, and flexible, easy-to-use
+logging capabilities.
+
+=> http://cr.yp.to/daemontools.html daemontools-like\r
+solutions are advisable.
+=> https://untroubled.org/daemontools-encore/ daemontools-encore\r
+is a very good option.
+=> http://smarden.org/runit/ runit\r
+=> http://www.skarnet.org/software/s6/ s6\r
+are also perfect choices.
+They are cross-platform, easy to compile and have a low learning curve.
--- /dev/null
+A [Index/Concepts]\r
+A job is the slave's output of a running/completed task.
+It is a directory with at least:
+* alive
+ A file constantly "touch"ed while the job is running. That updates
+ the file's mtime and tells that the process is still alive.
+* host.txt
+ The slave's hostname. The file is also used to determine when the
+ job was started.
+* tmp-path.txt
+ Path to the temporary directory on the slave, in case the job failed
+ and you wish to look at its state.
+* pkg.txt
+ Just the list of skelpkgs installed during the build.
+* env.txt
+ Dump of environment variables used to start each step.
+* steps/
+ Subdirectory containing more subdirectories named after each step.
+ Each of those directories contains at least:
+ * started
+ Empty file used for creation time determination.
+ * stdout.txt, stderr.txt
+ => http://cr.yp.to/libtai/tai64.html TAI64N-prefixed\r
+ output of corresponding streams.
+ * exitcode.txt
+ May not exist if the step is still in progress. Contains either
+ the decimal value of the step's return code, or the "timeout"
+ string in case the step was killed due to a long lack of output.
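+A monitor can detect a dead job purely from the "alive" file's mtime.
+A sketch (the 60-second staleness threshold is an assumption, not
+something this document specifies):
+
```shell
# True if the job directory's alive file was touched recently:
job_alive() {
    [ -n "$(find "$1/alive" -newermt '-60 seconds' 2>/dev/null)" ]
}
```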
--- /dev/null
+Master node(s) are intended to create tasks. As a rule, tasks are
+created as an event when someone pushes a commit. There are no
+specialised daemons running on the masters, because each project's
+task-making process can vastly differ in its details. As a rule, only
+the atomic counter utilities are commonly used by the "task makers".
+
+Let's see how the example/goredo CI pipeline is created.
+We want to run the tests when someone pushes a commit.
+
+* Prepare necessary directories at the very beginning:
+
+ mkdir -p /nfs/revs/goredo
+ mkdir -p /nfs/tasks/ctr/0
+ mkdir -p /nfs/tasks/{cur,old,tmp}
+ mkdir /nfs/jobs
+
+* The first thing to do is to create Git's post-receive hook, which
+ will touch files named after the revisions that need to be tested.
+
+ $ cat >goredo.git/hooks/post-receive <<'EOF'
+ #!/bin/sh -e
+
+ REVS=/nfs/revs/goredo
+ ZERO="0000000000000000000000000000000000000000"
+
+ read prev curr ref
+ [ "$curr" != $ZERO ] || exit 0
+ [ "$prev" != $ZERO ] || prev=$curr^
+ git rev-list $prev..$curr | while read rev ; do
+ mkdir -p $REVS/$ref
+ echo BASSing $ref/$rev... >&2
+ touch $REVS/$ref/$rev
+ done
+ EOF
+
+ After pushing a bunch of commits, corresponding empty files are
+ created in the revisions directory. Each filename is a commit's
+ hash. Those are basically notification events about the need to
+ create the corresponding tasks.
+
+* Someone has to process those events. Each project has its own
+ task-maker, because there are so many variations in how the code and
+ build steps can be retrieved and created. Let's create one:
+
+ #!/bin/sh -e
+
+ [ -n "$BASS_ROOT" ]
+ sname="$0" . $BASS_ROOT/lib/rc
+ [ -n "$REVS" ] || {
+ echo '"REVS"' is not set >&2
+ exit 1
+ }
+ [ -n "$PROJ" ] || {
+ echo '"PROJ"' is not set >&2
+ exit 1
+ }
+ [ -n "$STEPS" ] || {
+ echo '"STEPS"' is not set >&2
+ exit 1
+ }
+ [ -n "$ARCHS" ] || {
+ echo '"ARCHS"' is not set >&2
+ exit 1
+ }
+
+ cd $REVS
+ rev=$(find . -type f | sed -n 1p)
+ [ -n "$rev" ]
+ rev_path=$(realpath $rev)
+ rev=$(basename $rev)
+
+ task_proj=goredo
+ task_version=$(cd $PROJ ; $BASS_ROOT/master/bin/version-for-git $rev)
+ [ -n "$task_version" ]
+ task=":$task_proj:$task_version:"
+ mkdir $TASKS/tmp/$task
+ trap "rm -fr $TASKS/tmp/${task}*" HUP PIPE INT QUIT TERM EXIT
+
+ cd $STEPS
+ $BASS_ROOT/master/bin/version-for-git >$TASKS/tmp/$task/steps-version.txt
+ git rev-parse @ >$TASKS/tmp/$task/steps-revision.txt
+ # $TAR cf - --posix * | $COMPRESSOR >$TASKS/tmp/$task/steps.tar
+ git archive @ | $COMPRESSOR >$TASKS/tmp/$task/steps.tar
+
+ cd $PROJ
+ echo $task_version >$TASKS/tmp/$task/code-version.txt
+ git show --no-patch --pretty=fuller $rev >>$TASKS/tmp/$task/code-version.txt
+ echo $rev >$TASKS/tmp/$task/code-revision.txt
+ git archive $rev | $COMPRESSOR >$TASKS/tmp/$task/code.tar
+
+ tasks=$($BASS_ROOT/master/bin/clone-with-ctr $task
+ $(for arch in $ARCHS ; do echo ${task}${arch} ; done))
+ [ -n "$tasks" ]
+ for t in $tasks ; do
+ echo $t
+ mv $t ../cur
+ done
+
+ rm $rev_path
+
+ * Source $BASS_ROOT/lib/rc to get all possibly useful environment
+ variables. Expect $REVS (set by the $BASS_RC sourced file) to point
+ to the directory filled by the post-receive hook. Expect $PROJ to
+ point to the Git repository where we can read the code. Expect
+ $STEPS to point to the Git repository with build steps for that
+ project. Expect $ARCHS to hold a whitespace separated list of
+ architectures to create tasks for.
+ * Take one file from $REVS directory. Then go to project's root
+ and use version-for-git to get human readable name of the commit.
+ * The "task" variable holds the partially created name of the future
+ task.
+ * Create temporary directory in $TASKS/tmp.
+ * Go to $STEPS, save its Git commit revision in
+ $task/steps-revision.txt and all its code in $task/steps.tar.
+ * Go to $PROJ and similarly save its code version and code itself.
+ A [Index/Programs] clone-with-ctr\r
+ * Go to the temporary directory for tasks and call clone-with-ctr.
+ It copies your specified temporary directory to directories with
+ the architecture in their name, one directory per architecture
+ specified in $ARCHS.
+ Why not an ordinary "cp -a"? Because clone-with-ctr fsyncs your
+ source directory and hardlinks all files, taking virtually no
+ additional space for each of your tasks.
+ * At last, move your fsynced tasks out of tmp/. That way they will
+ appear atomically for processes looking at cur/.
+
+* That task-maker is expected to be run under some kind of supervisor,
+ like [CI/Daemontools].
+
+* Well, the task is created and the event is removed. Master finished
+ its job. Now it is time for a slave to acquire one of the appeared
+ tasks.
+
+Note that you can easily create tasks on cron events, just by touching
+files at a specified time. Whatever workflow you wish!
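For instance, a nightly cron job could create such an event by hand. A sketch under illustrative assumptions only: the paths and the commit hash below are made up, and a scratch directory stands in for /nfs/revs/goredo:

```shell
#!/bin/sh -e
# Touch a revision event file manually, exactly like the post-receive
# hook does. A crontab entry would then look something like:
#   0 3 * * * /path/to/this/script
REVS=$(mktemp -d)   # stands in for /nfs/revs/goredo in this sketch
ref=refs/heads/master
rev=0123456789abcdef0123456789abcdef01234567    # made-up commit hash
mkdir -p "$REVS/$ref"
touch "$REVS/$ref/$rev"
```

The task-maker does not care who created the event file, so the same machinery serves both push-driven and scheduled builds.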
--- /dev/null
+A [Index/Programs] notify-non-started\r
+A [Index/Programs] notify-non-taken\r
+master/bin/notify-non-started and master/bin/notify-non-taken commands
+can be used to notify you about problematic tasks.
--- /dev/null
+A [Index/Programs] reporter\r
+master/service/reporter can be run on a master node. This is a
+Web-server showing you the current tasks/jobs state.
+
+A list of tasks is shown on its main page. A job's start time is the
+host.txt file's creation time. A job's finish time is the last
+timestamp on the "alive" file.
+
+The list of steps makes columns for each task. Similarly, start time,
+finish time and duration are shown. A step's colour depends on the
+exitcode.txt value.
+
+Links to the stdout.txt/stderr.txt files are also shown. Pay attention
+that they contain a "?tai64nlocal=1" parameter, which converts raw
+hexadecimal TAI64N timestamps to human readable form.
--- /dev/null
+After the master has created some tasks on the shared filesystem, a
+slave must take and execute them.
+
+ A [Index/Programs] task-taker\r
+* slave/bin/task-taker is used for that task:
+
+ $ [ -n "$BASS_ROOT" ] || BASS_ROOT=/path/to/bass BASS_RC=/path/to/rc
+ $ $BASS_ROOT/slave/bin/task-taker
+
+ It is also expected to be run under some kind of supervisor. It
+ saves a lastnum file in the current directory with the latest task's
+ counter value it processed.
+
+ You may run multiple task-takers to run jobs in parallel.
+
+ A [Index/Programs] job-starter\r
+* task-taker runs slave/bin/job-starter on a taken task.
+
+* Initially job-starter takes the task and checks whether it has the
+ appropriate architecture and possibly the optional hostname set. It
+ exits successfully if the task is not for us. Another slave will
+ take it instead.
+
+ A [Index/Programs] slave-base\r
+* Then it creates the job's state in $JOBS/cur/$task. Various
+ metainformation is filled in it, like the path to the temporary
+ directory, hostname and so on. build/bin/mk-skelenv creates the
+ [Build/skelenv] in that temporary directory and installs the
+ slave-base package, which depends on various utilities needed for
+ running the testing steps.
+
+ A [Index/Programs] tmux\r
+* A tmux executable file is created in that directory, which you can
+ use to attach to the job's tmux instance.
+
+* code.tar and steps.tar are unpacked to that directory under code/ and
+ steps/ paths.
+
+* A background heartbeat process is started, touching the $job/alive
+ file every second.
+
+ A [Index/Programs] steps-runner\r
+* Then tmux is started in the steps/ directory and runs
+ slave/bin/steps-runner.
+
+* If steps-runner succeeds, then the temporary directory is removed.
+ Otherwise tmux is left running, waiting for someone to attach to it
+ and press Enter. If no input happens for $FAILED_JOB_WAITTIME seconds
+ (one hour by default), it exits and removes the temporary directory.
+
+What exactly does steps-runner do?
+
+* For each step, sorted lexicographically, it creates a corresponding
+ output directory in the job's directory with the started, stdout.txt
+ and stderr.txt files.
+
+* A step is run with its stdout/stderr redirected through the tai64n
+ utility, prepending a timestamp to each output line. Its exitcode is
+ saved in exitcode.txt. It is always run in the code/ directory.
+
+* A background process is also started to watch the step's output
+ progress. Every second it checks if any stdout/stderr output happened
+ during the last $LINE_TIMEOUT (ten minutes by default) seconds. If
+ the step is stuck (no output for that long), then it is killed and
+ "timeout" is written to exitcode.txt.
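That watchdog can be sketched roughly as follows. This is not the actual steps-runner code, and the mtime probe via Perl is an assumption (there is no portable POSIX stat utility):

```shell
#!/bin/sh -e
# Watch a running step: if neither stdout.txt nor stderr.txt has been
# written to for $LINE_TIMEOUT seconds, kill it and record "timeout"
# in exitcode.txt in the current (job's) directory.
watchdog() {
    pid=$1 out=$2 err=$3
    while kill -0 "$pid" 2>/dev/null ; do
        sleep 1
        # newest mtime among the two output files
        newest=$(perl -e 'print +(sort { $b <=> $a } map { (stat $_)[9] } @ARGV)[0]' "$out" "$err")
        if [ $(( $(date +%s) - newest )) -gt "${LINE_TIMEOUT:-600}" ] ; then
            kill "$pid" 2>/dev/null || :
            echo timeout >exitcode.txt
            break
        fi
    done
}
```

The loop exits on its own once the step terminates normally, so no extra cleanup signalling between the step and its watchdog is needed.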
--- /dev/null
+A [Index/Concepts]\r
+Task is the input for slaves. This is a directory with at least:
+ A [Index/Concepts] code.tar\r
+ A [Index/Concepts] code-revision.txt\r
+ A [Index/Concepts] code-version.txt\r
+* code.tar, code-revision.txt, code-version.txt
+ Compressed tarball of the code you have to test. As a rule that should
+ be some kind of "git archive" output. Text files accompany it with
+ human readable version and revision (commit hash) information.
+ A [Index/Concepts] steps.tar\r
+ A [Index/Concepts] steps-revision.txt\r
+ A [Index/Concepts] steps-version.txt\r
+* steps.tar, steps-revision.txt, steps-version.txt
+ Same as above, but the archive contains the steps your slave has to
+ perform. Steps are a bunch of lexicographically ordered executable
+ files.
+
+A [Index/Concepts] name\r
+Task's directory name has several fields: NUM:PROJ:VERSION:ARCH[:HOST].
+The optional HOST can be used to force the job to run on a specified
+host. NUM is a constantly increasing number, a kind of unique
+identifier of the task.
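For illustration, such a name splits on ":" with a plain read; every value below is made up:

```shell
#!/bin/sh -e
# Split NUM:PROJ:VERSION:ARCH[:HOST] into its fields (example values
# only; real names come from the task maker).
task="0042:goredo:v2.6.4:FreeBSD-amd64-13.2-RELEASE:myhost"
IFS=: read num proj version arch host <<EOF
$task
EOF
echo "num=$num proj=$proj version=$version arch=$arch host=${host:-any}"
```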
+
+A [Index/Variables] TASKS\r
+$TASKS directory has four subdirectories:
+* ctr/ -- atomically incrementing counter state
+* tmp/ -- temporary storage for tasks being created, moved to cur/ afterwards
+* cur/ -- ready to be taken tasks
+* old/ -- archived tasks, non-taken or completed
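The tmp/-then-move discipline can be demonstrated in isolation. A sketch with an illustrative task name and a scratch directory standing in for the real $TASKS:

```shell
#!/bin/sh -e
# Create a task under tmp/ and publish it to cur/ with a single mv.
# rename() within one filesystem is atomic, so a reader scanning cur/
# sees either the complete task directory or nothing at all.
TASKS=$(mktemp -d)      # stands in for the real $TASKS in this sketch
mkdir "$TASKS/tmp" "$TASKS/cur" "$TASKS/old"
task="1:goredo:v2.6.4:FreeBSD-amd64"
mkdir "$TASKS/tmp/$task"
echo v2.6.4 >"$TASKS/tmp/$task/code-version.txt"
mv "$TASKS/tmp/$task" "$TASKS/cur/"
```

This is why tmp/ must live on the same filesystem as cur/: a cross-filesystem mv degrades to copy-then-delete and loses the atomicity.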
--- /dev/null
+A [Index/Concepts]\r
+The CI system consists of masters and slaves, joined together by a
+shared filesystem. Masters create tasks for slaves. Slaves take those
+tasks and create jobs with computation results.
+
+<<[CI/overview.plantuml.txt]\r
+
+[CI/Task]
+[CI/Job]
+[CI/Daemontools]
+[CI/Master]
+[CI/Slave]
+[CI/Reporter]
+[CI/Notifier]
--- /dev/null
+A [Index/Concepts]\r
+There is a discussion mailing list available at
+=> http://lists.cypherpunks.su/bass.html\r
+
+Official website is
+=> http://www.bass.cypherpunks.su/\r
--- /dev/null
+Look at the build [Build/Requirements]. Some of those programs you can
+build with the help of the contrib/prepare-deps scripts. Both master
+and slave nodes will most likely also require a [CI/Daemontools]-like
+solution, but you should be able to build it with the BASS build
+system itself and install it in a [Build/skelenv].
+
+Currently this is a heavy work in progress, especially the CI part of
+the project (package building/management is pretty steady now).
+
+ $ git clone git://git.cypherpunks.su/bass.git
+ $ cd bass/contrib/prepare-deps
+ $ ./dl
+ $ ./do
+ $ PATH="$(realpath local/bin):$(realpath local/go/bin):$PATH"
+ $ cd ../../build
+ $ echo SKELBINS=/tmp/skelbins >rc
+ $ export BASS_RC=$(realpath rc)
+ $ pkg/mk-arch
+ $ redo pkg/FreeBSD-amd64-13.2-RELEASE/shells/zsh-5.9
+ $ cd /tmp
+ $ mkdir myskelenv
+ $ cd myskelenv
+ $ /path/to/bass/build/bin/mk-skelenv
+ $ /path/to/bass/build/bin/pkg-inst shells/zsh-5.9
+ $ . ./rc
+ $ zsh
+
+You can also use
+anongit@master.git.stargrave.org:cypherpunks.su/bass.git
+anongit@slave.git.stargrave.org:cypherpunks.su/bass.git
+anongit@master.git.cypherpunks.su:cypherpunks.su/bass.git
+anongit@slave.git.cypherpunks.su:cypherpunks.su/bass.git
+git://git.stargrave.org/bass.git
+git://y.git.stargrave.org/bass.git
+git://y.git.cypherpunks.su/bass.git URLs instead.
--- /dev/null
+do-backs\r
--- /dev/null
+do-backs\r
--- /dev/null
+do-backs\r
-@node Overview
-@unnumbered Overview
-
-@cindex machine roles
+A [Index/Concepts] machine-roles\r
BASS ecosystem has at least three separate roles for the involved
machines:
+* *build* machine(s)
+* *master*(s)
+* bunch of *slave*s
-@itemize
-@item @strong{build} machine(s)
-@item @strong{master}(s)
-@item bunch of @strong{slave}s
-@end itemize
-
-@cindex NFS
-@cindex shared filesystem
+A [Index/Concepts] NFS\r
+A [Index/Concepts] shared-filesystem\r
All of them use shared filesystem(s). There is no network protocol API
of any kind between the nodes. No explicit queue manager, lock manager
or anything like that. Just a shared filesystem. You can run all of that
on a single computer with a single generic filesystem. You can share some
-parts of it via @command{nullfs} with isolated slave jails. You can run
-them on remote machines connected with NFS. SMB and CephFS are
-POSIX-compatible enough and also have @command{mkdir} operation atomic,
-which is the only requirement here.
+parts of it via nullfs with isolated slave jails. You can run them on
+remote machines connected with NFS. SMB and CephFS are POSIX-compatible
+enough and also have an atomic mkdir operation, which is the only
+requirement here.
Build node prepares binary packages and saves them on shared directory,
making them available to slave nodes. Master node creates task in shared
Job's results are saved in shared directory, which is browseable on
master node.
-@verbatiminclude overview.plantuml.txt
+<<[overview.plantuml.txt]\r
--- /dev/null
+Why not
+=> https://www.buildbot.net/ BuildBot\r
+=> https://www.jenkins.io/ Jenkins\r
+=> https://www.travis-ci.com/ TravisCI\r
+or similar solutions? They are aimed to be run on large installations,
+where highly isolated untrusted code is run. No developers are expected
+to be able to log in on their slave nodes for debugging in case of test
+failure. But if you want a small installation, where you can easily
+login everywhere, where only Unix-like systems are involved, all those
+CI solutions are a burden to install and quickly use.
+
+Moreover, all of them require a bloated JavaScript-driven Web-browser.
+BuildBot is some kind of an exception, being much simpler. Its early
+versions worked without a JS-poisoned Web-interface. But try to install
+Python software with *source* dependencies from PyPI -- you will be
+excited to see very basic packages depending on Rust.
+
+What is the problem with Rust? There is no official way to bootstrap it,
+except for downloading and blind execution of some binaries for your
+platform.
+
+None of them has any kind of package management. Actually they do not
+have to, but then you are expected to manually install the required
+software on each slave. Docker could help there, but it supports only
+GNU/Linux.
+
+But what portable package manager choices are available, supporting
+multiple completely different operating systems?
+=> https://nixos.org/ Nix\r
+has supported only GNU/Linux for a long time (however initially it had
+at least FreeBSD support). The only well-known cross-platform choice is
+=> https://www.pkgsrc.org/ NetBSD's pkgsrc\r
+But, unlike Nix, being a classical installation system, it won't be
+able to painlessly install multiple versions of the same package or
+work in the completely isolated temporary directory of a CI build job.
+And both of them, especially Nix, have a considerable learning curve.
+++ /dev/null
-@node Build distfiles
-@cindex distfiles
-@section Distfiles
-
-Build process @strong{must} not depend on the Internet access. You
-must be able to preliminary download all necessary source code needed
-for the skel. And as in most package build systems, that retrievable
-source code is called "distfile".
-
-There are no requirements how you will get it. But in most cases that is
-either downloadable tarball or archive created from specific VCS commit.
-Modern DVCS'es commit is already self integrity checked tree of files.
-But tarball is just a bunch of bytes. You have to provide the URL where
-to get it, and checksum(s) to verify against, to be sure that it is not
-tampered or altered somehow. Moreover many distribution sites also
-provide a detached cryptographic signature of the tarball, which you can
-verify against some know author's public key.
-
-That is why @url{https://datatracker.ietf.org/doc/html/rfc5854, Metalink4}
-files are used to get tarballs. They also tend to contain checksum hash
-present on the download website.
-
-@pindex build/distfiles/pack
-You can download all distfiles by invoking @command{redo distfiles/all}
-target. An archive with all of them can be created with
-@command{distfiles/pack >distfiles.tar} command.
-
-@file{.meta4} files can be processed by either of three programs:
-
-@table @env
-
-@item $META4_FETCHER=meta4ra-check
-Use @command{meta4ra-check -dl 0} command to download the first URL.
-This is by default, because @command{meta4ra} utilities are anyway
-installed already. It won't try to download other URLs, unlike other
-fetch options there!
-
-@item $META4_FETCHER=wget
-Use @command{wget} compiled with @option{--with-metalink} option. The
-only drawback is that most OS distributions contain Wget without that
-(@option{--input-metalink}) option.
-
-@item $META4_FETCHER=aria2c
-Use @url{http://aria2.github.io/, Aria2}. Unfortunately it sometimes
-fails to deal with links on GitHub.com.
-
-@end table
+++ /dev/null
-@node skelpkg hooks
-@cindex skelpkg hooks
-@section Hooks
-
-Generally installation of a skelpkg is just an unpacking of the
-@file{bin} archive to skelbins directory and creating a symbolic links
-to files inside it. But there is ability to run "pre install"
-(@code{preinst}), "post install" (@code{postinst}), "pre remove"
-(@code{prerm}) and "post remove" (@code{postrm}) hooks.
-
-Hook is a directory with at least one executable file. All executable
-files in that directory are called in a lexicographical order. Each hook
-is placed in @code{$NAME-$hsh/skelpkg/$NAME-$hsh/hooks/$hook} directory.
-
-Hook is executed inside the directory we performing skelpkg
-installation, directory with the @file{local/} subdirectory.
-It expects to get following environmental variables:
-
-@table @env
-@item $DST
- Path to directory where we perform installation of the skelpkg.
-@item $PKG
- Name of the skelpkg user entered. As a rule it is more-or-less human
- readable name without any hashes.
-@item $NAMENHASH
- @env{$NAME-$hsh} name of the package.
-@item $BASS_ROOT, $BASS_RC, ...
-@end table
-
-@cindex preinst example
-One of the frequent uses of @code{preinst} hook is installation of
-runtime dependencies. For example cURL depends on OpenSSL, so let's see
-its hook:
-
-@example
-$ tar xfO $SKELPKGS/$ARCH/curl-8.6.0 name | read namenhash
-
-$ tar xfO $SKELPKGS/$ARCH/curl-8.6.0 bin |
- tar tf - $namenhash/skelpkg/$namenhash/hooks/preinst
-$namenhash/skelpkg/$namenhash/hooks/preinst/010-rdeps
-
-$ tar xfO $SKELPKGS/$ARCH/curl-8.6.0 bin |
- tar xfO - $namenhash/skelpkg/$namenhash/hooks/preinst/010-rdeps
-#!/bin/sh -e
-exec "$BASS_ROOT"/build/bin/pkg-inst openssl-1.1.1w
-@end example
-
-@cindex postinst example
-@code{postinst} hook can be used to alter @env{$DST}'s @file{rc} file,
-like @command{pkgconf} skelpkg does:
-
-@example
-$ tar xfO $SKELPKGS/$ARCH/pkgconf-2.1.1 name | read namenhash
-$ tar xfO $SKELPKGS/$ARCH/pkgconf-2.1.1 bin |
- tar xfO - $namenhash/skelpkg/$namenhash/hooks/postinst/01rc-add
-#!/bin/sh -e
-_localpath="$(realpath local)"
-cat >>rc <<EOF
-PKG_CONFIG_PATH="$_localpath/lib/pkgconfig:\$PKG_CONFIG_PATH"
-PKG_CONFIG_PATH="$_localpath/libdata/pkgconfig:\$PKG_CONFIG_PATH"
-export PKG_CONFIG_PATH
-EOF
-@end example
+++ /dev/null
-@node Requirements
-@cindex build requirements
-@section Requirements
-
-Build system, except for basic commands like @command{cc},
-(POSIX) @command{make}, requires at least:
-
-@table @asis
-
-@cindex bsdtar
-@item @command{bsdtar}
-GNU tar, available by default on GNU OSes, brings many complications in
-build-related scripts, because it is unable to decompress stdin data on
-the fly. Many GNU/Linux distributions have @command{libarchive-tools}
-package, containing libarchive-based @command{bsdtar} utility, that
-perfectly deals with any compressed archive transparently.
-
-@cindex meta4ra
-@item @url{http://www.meta4ra.stargrave.org/, meta4ra}
-Utilities for making and checking @file{.meta4} files. They are just a
-wrapper over XML and external hasher commands interoperation. They also
-can be used for downloading.
-
-@item @url{https://www.perl.org/, Perl}
-Shell scripts are hard to write in a portable way. For example there is
-no way to know file's size using POSIX-compatible utilities solely,
-except for feeding through @command{wc -c}. There is no way to get
-file's @code{mtime}, as @command{stat}'s options are completely
-different on BSD and GNU systems. Only small subset of features is
-common among @command{sed}, @command{awk}, @command{grep} utilities.
-There is no reliable portable way of using @command{sed -i} for example.
-
-That is why the only sane option is Perl in most cases. Its interpreter
-is minimalistic enough and tend to be included even in OpenWRT
-distributions. It behaves the same way on all widespread OSes (no zoo of
-loosely compatible dialects).
-
-@cindex redo
-@cindex goredo
-@item @url{http://cr.yp.to/redo.html, redo} build system
-@command{redo} started to be used in the project due to its built-in
-ability of using locks to prevent concurrent building of the same
-target. skelbins have to be installed to permanent paths, so concurrent
-builds will ruin them. @command{redo} contains atomic reliable writes of
-the target's result. And by definition it has dependency tracking of the
-targets. All of that greatly reduces the skel's code size. And as an
-unexpected feature you get ability to parallelise your builds.
-
-@url{http://www.goredo.cypherpunks.su/, goredo} is recommended
-implementation, being very fast and having largest integration test suite.
-
-@cindex setlock
-@cindex lockf
-@cindex flock
-@item Any of @command{setlock}, @command{lockf}, @command{flock}
-@command{setlock} from @command{daemontools}, or
-@command{lockf} from BSD, or
-@command{flock} from GNU.
-
-@item @url{https://go.dev/, Go}
-At least for building supplementary utilities. And, obviously, if you
-build Go-related software. Actually Go-written utilities can be replaced
-and no Go dependency will be required at all.
-
-@item FreeBSD's @command{fetch}, or @url{https://www.gnu.org/software/wget/, GNU Wget}, or @url{https://curl.se/, cURL}
-Although @command{meta4ra} can be used instead all of them.
-
-@end table
+++ /dev/null
-@node skelbin
-@cindex skelbin
-@section skelbin
-
-Built skel is called "skelbin". Each skelbin is placed in its own
-directory, containing nothing more than just that installed skelbin. In
-most cases that is done trivially by specifying
-@option{--prefix=$SKELBINS/$NAME-$hsh} in the skel. @option{$NAME} is
-the skel's name, package's name, something like @file{perl-5.32.1}.
-
-@cindex namenhash
-@option{$hsh} is a supplementary hash value used to distinguish
-different builds/revisions of the same package. Currently it is just a
-hash of the skel itself and BASS'es current commit revision. It is
-URL-safe Base64 encoded string. So for example if @env{$SKELBINS} is
-@file{/tmp/skelbins} directory, then that Perl skelbin is installed to:
-@file{/tmp/skelbins/perl-5.32.1-zP3IpCa_XY7pGHCNYQxp_1KjQQNCyUl84LqSrWLErjA}.
-@var{$NAME-$hsh} is often called "namenhash" in the code.
-
-@cindex GNU Stow
-But that is just a single software skelbin directory. What if my another
-skel requires multiple other skelbins, depends on them?
-@url{https://www.gnu.org/software/stow/, GNU Stow} helps there. That is
-simple symbolic links manager. Assume you have got
-@file{/tmp/skelbins/perl5-$hsh0} and
-@file{/tmp/skelbins/gmake-4.4-$hsh1} skelbins and you current working
-directory is @file{/tmp/tmp.whatever}. @command{stow} is used to create
-symlinks from dependant skelbins to our current's @file{local/}
-subdirectory that way:
-
-@example
-/tmp/tmp.whatever/local/bin/gmake -> /tmp/skelbins/gmake-4.4-$hsh1/bin/gmake
-/tmp/tmp.whatever/local/bin/perl5 -> /tmp/skelbins/perl5-$hsh0/bin/perl5
-/tmp/tmp.whatever/local/lib/site_perl -> /tmp/skelbins/perl5-$hsh0/lib/site_perl
-/tmp/tmp.whatever/local/share/info -> /tmp/skelbins/gmake-4.4-$hsh1/share/info
-[...]
-@end example
-
-If you add @file{$tmp/local/bin} to your @env{$PATH} and
-@file{$tmp/local/lib} to @env{$LD_LIBRARY_PATH}, then both
-@command{gmake} and @command{perl} will be available to that local build
-and work perfectly. Alter @env{$CFLAGS}, @env{$LDFLAGS},
-@env{$PKG_CONFIG_PATH} and in most cases the whole building environment
-will be aware about those skelbins.
+++ /dev/null
-@node skelenv
-@cindex skelenv
-@section skelenv
-
-skelpkgs are installed in so-called "skelenv" (skel-environment).
-skelenv is a directory with at least @file{local/} directory.
-
-@pindex build/bin/pkg-inst
-With @command{pkg-inst $PKG} command you can "install" skelpkgs to
-that skelenv. Installation procedure checks if skelbin (unpacked
-skelbin) already exists in @env{$SKELBINS}. Unpacks skelpkg if it is
-not. Then is runs @code{preinst} hook, @command{stow}, @code{postinst}
-hook.
-
-For @command{pkg-inst} usage, @file{skelpkgs/$PKG} subdirectory is
-created in skelenv, with:
-
-@itemize
-@item @file{lock} -- used by @command{pkg-inst} and @command{pkg-rm}
- commands themselves.
-@item @file{namenhash} -- @env{$NAME-$hsh}
-@item @file{preinst.done}, @file{postinst.done}, @file{prerm.done},
- @file{postrm.done} -- to track if corresponding hooks were
- successfully executed (if they exist)
-@end itemize
-
-Only one version of the skelpkg can be installed. You can not use two
-skelpkgs with varying hashes inside them.
-
-Some skelpkgs tend to create and alter @file{rc} file in the root of
-skelenv. If is aimed to be sourced by your shell, to modify various
-environment variables for ease of skelenv usage.
-
-@pindex build/bin/pkg-rm
-@command{pkg-rm $PKG} command can be used to remove skelpkgs from your
-skelenv. Basically it is just un-stow-ing of them and removing the
-@file{skelpkgs/$PKG}, with consideration of the @code{*rm} hooks.
-
-@pindex build/bin/mk-skelenv
-@command{mk-skelenv} can be used to create skelenv in current directory.
-It creates @file{local/} directory and installs @code{rc-paths} and
-@code{stow} packages.
+++ /dev/null
-@node skelpkg
-@cindex skelpkg
-@section skelpkg
-
-skelbins are not appropriate to be distributable as is, as a directories
-with bunch of files. That could be fragile due to network filesystem
-limitations. That is slow, because some skelbins already contains tens
-of thousands of files. And additional metadata has to be supplied with
-the skelbin. Your build steps are not aware about the exact
-@option{$hsh} values of the package and it would be insane to hardcode
-and repeatedly update after each BASS/skel's change. And skelbin can
-depend on another skelbin to work (runtime dependency).
-
-@cindex skelpkg format
-That is why, we have to use some kind of distribution format for solving
-the issues above. "skelpkg" is a packed skelbin with additional
-metadata. Similarly to Arch Linux and
-@url{https://www.gentoo.org/glep/glep-0078.html, Gentoo}, skelpkg is a
-single file, uncompressed POSIX pax archive with following entries:
-
-@table @file
-
- @item name, name.meta4
- Full name of the skelbin directory, @file{$NAME-$hsh}.
- With an optional checksum file.
-
- @item buildinfo, buildinfo.meta4
- Just a textual information how that skelbin/skelpkg was built.
- Currently just a current BASS'es commit revision.
-
- @item bin.meta4, bin
- Compressed POSIX pax archive containing the skelbin
- (@file{$NAME-$hsh/} directory hierarchy).
-
-@end table
-
-@cindex pax archive
-@cindex ustar archive
-@pindex detpax
-POSIX ustar archive format can not hold more than 8GiB of data and (very)
-long filenames. Forced pax usage guarantees compatibility with variety
-of OSes. GNU tar's format (also not having limitations above) easily
-could be unreadable on non-GNU systems. BASS uses
-@command{build/contrib/detpax} archiver for creating pax archives in
-deterministic bit-to-bit reproducible way.
-
-As pax/tar does not have any kind of index, as ZIP does, it is crucial
-to place the largest @file{bin} file at the very end of the archive. And
-that is why the outer archive is not compressed -- to easily seek among
-its entries.
-
-@cindex Metalink4
-@cindex .meta4
-Metalink4 (@url{https://datatracker.ietf.org/doc/html/rfc5854, RFC 5854})
-XML-based format is used to keep integrity checksums for files. It is
-well supported format by various tools and it is capable of storing
-multiple checksums simultaneously. That allows us to keep both Streebog
-hashes and much more faster ones.
-
-Nothing prevents you from extending it with additional files, for
-example holding cryptographic signatures.
-
-skelpkg's name is whatever you want. As a rule it should be just skel's
-@option{$NAME}. But what if you do not care about exact skel's version
-and just want to install whatever @command{perl} (for example)? You can
-always just create a (sym)link to it with a short name.
-
-@cindex Zstandard
-@file{bin} inner archive is compressed by default with
-@url{https://facebook.github.io/zstd/, Zstandard}. Being much faster
-than venerable @command{gzip}, it achieves much better compression
-ratio. But the main issues is its ultimate decompression speed, where
-hardly your CPU will be the bottleneck. Reducing amount of data transfer
-between disks/network and you system results in considerable decrease in
-transfer/installation time. That is why so many package managers and
-distributions already moved to its usage by default.
-
-@vindex COMPRESSOR
-But you can override and use any kind of compressor in the skelpkg (with
-@env{$COMPRESSOR} when using @command{build/lib/mk-pkg}). That is
-required for example for @command{zstd} skelpkg itself, that can not be
-decompressed without already having @command{zstd} installed.
+++ /dev/null
-@node Build tutorial
-@section Tutorial
-
-One of the most trivial and simple skel of hello world program can be
-made with the following skel in @file{skel/hw.do}:
-
-@example
-[ -n "$BASS_ROOT" ] || BASS_ROOT="$(dirname "$(realpath -- "$0")")"/../../..
-sname=$1.do . "$BASS_ROOT"/lib/rc
-. "$BASS_ROOT"/build/skel/common.rc
-
-mkdir -p "$SKELBINS"/$ARCH/$NAME/bin
-cd "$SKELBINS"/$ARCH
-cp ~/src/misc/hw/hw.pl $NAME/bin
-"$BASS_ROOT"/build/lib/mk-pkg $NAME
-@end example
-
-But let's write a skel and build a skelpkg for convenient
-@url{https://www.gnu.org/software/parallel/, GNU parallel} utility.
-
-@enumerate
-
-@vindex BASS_RC
-@item Go to @file{build/} subdirectory, and create configuration file. I
-tend to call it @file{rc}. Set @env{$BASS_RC} environment variable with
-the path to it:
-
-@example
-$ cd build/
-$ cat >rc <<EOF
-MAKE_JOBS=8
-SKELBINS=/tmp/skelbins
-EOF
-$ export BASS_RC=`realpath rc`
-@end example
-
-@vindex ARCH
-@vindex SKELBINS
-@vindex BASS_REV
-You can look for variables you can set in @file{lib/rc}. One of the most
-important variable is @env{$ARCH}, which sets what architecture you are
-using. If you have got non-Git capable checkout, then probably you
-should also set @env{$BASS_REV} to some dummy string. When it is changed
--- your skelbin's hashes too. @env{$SKELBINS} path is crucial to be the
-same as it will be on the slaves.
-
-@item Prepare distfile download rule. @file{distfiles/} directory
-contains @file{default.do} target, so it will be executed by default for
-every target in it (unless that target has its own @file{.do} file). And
-by default that target will download file based on corresponding
-@file{.meta4} file nearby.
-
-According to @command{parallel}'s homepage, it is advised to use GNU
-mirrors for downloading, so let's take its latest release with the
-signature:
-
-@example
-$ wget https://ftpmirror.gnu.org/parallel/parallel-20240122.tar.bz2
-$ wget https://ftpmirror.gnu.org/parallel/parallel-20240122.tar.bz2.sig
-@end example
-
-Its @file{.sig} file contains non-signature related commentary, that we
-should strip off:
-
-@example
-$ perl -i -ne 'print if /BEGIN/../END/' parallel-20240122.tar.bz2.sig
-@end example
-
-Many software provides signatures in binary format, that could be easily
-converted with @code{gpg --enarmor <....sig >....asc}.
-
-@pindex meta4ra-create
-Then we must create corresponding Metalink4 file, which includes
-signature, URL(s) and checksums. I will use
-@command{meta4ra-create} utility for that purpose:
-
-@example
-$ meta4ra-create \
- -fn parallel-20240122.tar.bz2 \
- -sig-pgp parallel-20240122.tar.bz2.sig \
- https://ftpmirror.gnu.org/parallel/parallel-20240122.tar.bz2 \
- <parallel-20240122.tar.bz2 >parallel-20240122.tar.bz2.meta4
-@end example
-
-@cindex skel example
-@item Write the skel file @file{skel/sysutils/parallel-20240122.do} itself:
-
-@example
-[ -n "$BASS_ROOT" ] || BASS_ROOT="$(dirname "$(realpath -- "$0")")"/../../../..
-sname=$1.do . "$BASS_ROOT"/lib/rc
-. "$BASS_ROOT"/build/skel/common.rc
-
-bdeps="rc-paths stow archivers/zstd devel/gmake-4.4.1"
-rdeps="lang/perl-5.32.1"
-redo-ifchange $bdeps "$DISTFILES"/$name.tar.bz2 $rdeps
-hsh=$("$BASS_ROOT"/build/bin/cksum $BASS_REV $spath)
-. "$BASS_ROOT"/build/lib/create-tmp-for-build.rc
-"$BASS_ROOT"/build/bin/pkg-inst $bdeps $rdeps
-. ./rc
-$TAR xf "$DISTFILES"/$name.tar.bz2
-"$BASS_ROOT"/bin/rm-r "$SKELBINS"/$ARCH/$NAME-$hsh
-
-cd $NAME
-./configure --prefix="$SKELBINS"/$ARCH/$NAME-$hsh --disable-documentation >&2
-perl -i -ne 'print unless /^\s+citation_notice..;$/' src/parallel
-gmake -j$MAKE_JOBS >&2
-gmake install >&2
-
-cd "$SKELBINS"/$ARCH
-"$LIB"/prepare-preinst-010-rdeps $NAME-$hsh $rdeps
-mkdir -p $NAME-$hsh/skelpkg/$NAME-$hsh/hooks/postinst
-cat >$NAME-$hsh/skelpkg/$NAME-$hsh/hooks/postinst/01will-cite <<EOF
-#!/bin/sh
-echo yeah, yeah, will cite >&2
-EOF
-chmod +x $NAME-$hsh/skelpkg/$NAME-$hsh/hooks/postinst/01will-cite
-"$BASS_ROOT"/build/lib/mk-pkg $NAME-$hsh
-@end example
-
-@pindex build/pkg/mk-arch
-@item Create a link to it in the skelpkgs directory for the given
-architecture. You can use @file{pkg/mk-arch} to conveniently create
-the @env{$ARCH} directory and link all missing skels into it:
-
-@example
-$ pkg/mk-arch
-@end example
-
-@item Run the skelpkg creation job itself:
-
-@example
-$ redo pkg/FreeBSD-amd64-13.2-RELEASE/sysutils/parallel-20240122
-@end example
-
-@item Check and confirm that created file looks like a skelpkg:
-
-@example
-% tar xfO pkg/FreeBSD-amd64-13.2-RELEASE/sysutils/parallel-20240122 bin | tar tf -
-parallel-20240122-xhVYojyMWD8XeHTuTe44q1NyHI2b_l5fKsopunYFzkc/
-parallel-20240122-xhVYojyMWD8XeHTuTe44q1NyHI2b_l5fKsopunYFzkc/bin/
-parallel-20240122-xhVYojyMWD8XeHTuTe44q1NyHI2b_l5fKsopunYFzkc/bin/env_parallel
-parallel-20240122-xhVYojyMWD8XeHTuTe44q1NyHI2b_l5fKsopunYFzkc/bin/[...]
-parallel-20240122-xhVYojyMWD8XeHTuTe44q1NyHI2b_l5fKsopunYFzkc/bin/parallel
-@end example
-
-@end enumerate
-
-Let's describe what is happening in the skel:
-
-@itemize
-
-@item
-  As the @file{.do} file is not executable and has no shebang, most
-  popular @command{redo} implementations will start it with the
-  @command{/bin/sh -e} command. So it is a POSIX shell script. But
-  you are free to use any interpreted language, or even build and
-  compile the @file{.do} file itself with another @file{.do}.
-
-@vindex BASS_ROOT
-@item
-  Nearly all BASS scripts and programs require you to set
-  @env{$BASS_ROOT} (the path to the root directory of the BASS
-  project (@file{build/}, @file{master/}, @file{slave/})) and
-  @env{$BASS_RC}, which you already set before. @env{$BASS_ROOT} is
-  generally set by the invoking script itself, derived from its own
-  path in BASS'es hierarchy.
-
-  The line with the @env{$BASS_ROOT} setting can simply be
-  copy-pasted among all skels.
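That copy-pasted line can be illustrated in isolation. A minimal sketch, assuming a skel that lives two levels below the BASS root (the number of @file{..} components simply matches the skel's depth; the depth here is an illustrative assumption):

```shell
# Sketch: a skel derives the BASS root from its own location, so the
# very same line works from any skel once the number of .. components
# matches its depth.  Two levels of depth here are illustrative.
[ -n "$BASS_ROOT" ] || BASS_ROOT="$(dirname "$(realpath -- "$0")")"/../..
echo "$BASS_ROOT"
```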
-
-@pindex common.rc
-@item
-  @file{common.rc} checks that we are running under the @file{pkg/}
-  directory, not the @file{skel/} one. If an already prebuilt target
-  result exists in @file{pkg/.../prebuilt/$PKG}, then it is
-  hardlinked as the result. Be @strong{aware} that it also changes
-  the current working directory to @env{$SKELPKGS}, so it can depend
-  on @file{subdir/pkg} packages.
-
-@vindex sname
-@vindex lib/rc
-@item
-  Nearly all BASS scripts and programs also assume that you will
-  source the @file{$BASS_ROOT/lib/rc} file, which sets various common
-  variables. It expects you to pass the @env{$sname} variable with
-  the current script's name.
-
- It will check if @env{$BASS_RC} is specified and set:
-
- @table @env
-
- @vindex NAME
- @item $NAME
- Base name of the script/skel, without @file{.do} extension.
-
- @vindex SPATH
- @item $SPATH
- Full path to the invoking script itself.
-
- @vindex ARCH
- @item $ARCH
-  Current machine's architecture, the one it builds for.
-
- @vindex SKELBINS
- @item $SKELBINS
- Path to directory with unpacked skelbins.
-
- @vindex SKELPKGS
- @item $SKELPKGS
- Path to directory containing built @file{$ARCH/$SKELPKG} skelpkgs.
-
- @vindex MAKE_JOBS
- @item $MAKE_JOBS
-  Number of Make's parallel jobs. Can be passed to @command{make}.
-
- @vindex DISTFILES
- @item $DISTFILES
- Path to @file{$BASS_ROOT/build/distfiles} directory.
-
- @item $BASS_REV
- Current BASS'es source code revision.
-
- @vindex SETLOCK
- @vindex META4RA_HASHES
- @vindex FSYNC
- @vindex TAR
- @vindex TMPDIR
- @item $SETLOCK, $META4RA_HASHES, $FSYNC, $TAR, $TMPDIR
-
- @end table
-
-  And of course, in most cases they can be overridden with your
-  @env{$BASS_RC}.
-
-@item
-  @env{$bdeps} and @env{$rdeps} are just convenient variables that
-  save us from repeating the dependency lists throughout the script.
-  @command{parallel} requires Perl during build and at runtime, so I
-  called it a "runtime dependency". Actually it builds perfectly with
-  POSIX/BSD @command{make}, but as an exercise we assume that it
-  builds only with GNU make, so we also record it as a "build
-  dependency".
-
- Nearly every skel requires @code{rc-paths} (see below), @code{stow}
- and @code{zstd} skelpkgs.
-
- @pindex skel/stow
-  @code{stow} skelpkg is very special: it can be built without
-  invoking GNU Make and the Perl it depends on. It can also be
-  installed by @command{pkg-inst} even if no Stow is installed yet:
-  it stows itself in its @code{postinst} hook. Also, it installs the
-  @code{perl} skelpkg only if it exists, so it works on a clean build
-  system.
-
- @pindex skel/zstd
-  @code{zstd} skelpkg makes the @command{zstd*} compressor available
-  for the @command{mk-pkg} command that creates the resulting
-  skelpkg. Only a few skelpkgs use the @command{gzip} compressor.
-
-@item
-  Then we call @command{redo-ifchange} to ensure that our distfile
-  exists (otherwise it will be downloaded), together with the
-  dependency packages. Remember that @command{redo} guarantees to run
-  the script in the directory where it lives, so the dependency paths
-  are relative to it. If any of them does not exist, @command{redo}
-  will invoke its build the same way we invoked the build of the
-  @command{parallel} skelpkg. It also ensures that the
-  @code{rc-paths}, @code{stow} and @code{zstd} skelpkgs exist.
-
-@pindex build/bin/cksum
-@item
-  Next we compute the current skelpkg's hash. Currently it is used
-  solely to produce a different hash whenever either the BASS commit
-  or the skel itself changes. The @file{$BASS_ROOT/build/bin/cksum}
-  utility takes any number of arguments, each of which is either a
-  string or a path to a file. @command{cksum} hashes all that
-  information together.
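The interface can be sketched like this; the @code{hash_all} name, the use of POSIX @command{cksum} and the newline separator are illustrative assumptions, not the real implementation:

```shell
# Rough sketch of a cksum-like helper: every argument naming an
# existing file contributes its contents, any other argument is taken
# as a literal string; one checksum covers all of it, so changing any
# input changes the result.  Illustrative, not the real
# build/bin/cksum.
hash_all() {
    for arg in "$@"; do
        if [ -f "$arg" ]; then cat -- "$arg"; else printf '%s\n' "$arg"; fi
    done | cksum | awk '{print $1}'
}
```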
-
-@pindex build/lib/create-tmp-for-build.rc
-@item
- Then we source @file{$BASS_ROOT/build/lib/create-tmp-for-build.rc} helper.
- @itemize
- @item it creates and changes to temporary directory (@env{$tmp})
- @item makes a trap to remove it in case of errors and exit
- @item creates @file{local/} subdirectory
- @end itemize
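In plain shell, those three duties amount to something like the following sketch (illustrative only, not the helper's actual code):

```shell
# Sketch of the helper's duties: make a private temporary directory,
# arrange for its removal on error or exit, and prepare the local/
# subdirectory for dependency installation.
tmp=$(mktemp -d) || exit 1
trap 'rm -fr "$tmp"' HUP PIPE INT QUIT TERM EXIT
cd "$tmp"
mkdir local
```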
-
-@pindex build/bin/pkg-inst
-@item
-  Then it installs build and runtime dependencies with the
-  @command{$BASS_ROOT/build/bin/pkg-inst} command.
-
-  Pay attention that it installs the @code{stow} skelpkg first, which
-  virtually every other skelpkg requires to work properly. So the
-  order of skels is very important there.
-
-@pindex skel/rc-paths
-@item
-  Then it sources the @file{./rc} file. Where did it come from? Each
-  skelpkg can contain @code{postinst} hooks. The @code{rc-paths}
-  skelpkg is used solely for the side effect of its @code{postinst}
-  hook, which creates that @file{rc} file with altered
- @env{$PATH},
- @env{$MANPATH},
- @env{$INFOPATH},
- @env{$LD_LIBRARY_PATH},
- @env{$CFLAGS},
- @env{$CXXFLAGS},
- @env{$LDFLAGS}
-  variables, making them aware of the @file{local/} hierarchy.
-
-  When we install the @command{pkgconf} skelpkg, its @code{postinst}
-  hook similarly appends alteration of the @env{$PKG_CONFIG_PATH}
-  variable.
-
- Now we are aware of various installed packages and specific
- environment variables.
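The generated @file{rc} can be imagined roughly as follows; the exact variable layout is an assumption for illustration:

```shell
# Hypothetical contents of an rc-paths-generated ./rc: each variable
# gets the corresponding local/ subdirectory prepended, so later
# build steps find dependency binaries, manpages, libraries and
# headers without any global installation.
PATH="$PWD/local/bin:$PATH"; export PATH
MANPATH="$PWD/local/man:$MANPATH"; export MANPATH
INFOPATH="$PWD/local/info:$INFOPATH"; export INFOPATH
LD_LIBRARY_PATH="$PWD/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"; export LD_LIBRARY_PATH
CFLAGS="-I$PWD/local/include $CFLAGS"; export CFLAGS
CXXFLAGS="-I$PWD/local/include $CXXFLAGS"; export CXXFLAGS
LDFLAGS="-L$PWD/local/lib $LDFLAGS"; export LDFLAGS
```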
-
-@item
-  Unpack the distfile with @command{$TAR} into the current temporary
-  directory.
-
-@pindex build/bin/rm-r
-@item
-  Remove the skelbin if it already exists. In theory, each time you
-  modify your skels you make a commit in the BASS repository, thus
-  changing @env{$BASS_REV} and the corresponding @env{$hsh} value, so
-  each new skelbin build should land in a different directory. But
-  while you are developing a skel, no commits are made and no hashes
-  change. Moreover, your previous build attempt may have failed due
-  to an I/O or system error.
-
-  Because of @command{redo}'s lockfiles it should be safe to remove
-  the existing skelbin: no one can be using it.
-
-  Why not a trivial @command{rm -fr}? Skelbins are forced to be
-  read-only directories, so you simply won't have enough permissions
-  to remove them. @command{$BASS_ROOT/bin/rm-r} deals with that.
-
-@item
- Go to the unpacked directory and @command{configure} the program at
- last! Pay attention to proper installation to immutable/permanent
- path under @env{$SKELBINS/$ARCH/$NAME-$hsh}.
-
-  Remember that any output to stdout is saved by @command{redo} as
-  the result of the target! So do not forget to redirect messages to
-  stderr or silence them altogether.
-
-@item
-  GNU Parallel has a possibly annoying and disturbing notice about
-  citing it. Let's patch the source and remove that code. You can do
-  whatever you want with the code here: download a patch as a
-  distfile and apply it, or keep some source code nearby in the skels
-  directory. No limitations.
-
-@item
-  Call @command{gmake} to build it. Because the @command{gmake}
-  skelpkg is installed, that command will be available under that
-  name even on GNU systems. Use @env{$MAKE_JOBS} if it is appropriate
-  and safe. Be aware that many programs cannot be built in parallel
-  mode.
-
-@item
-  As @command{parallel} requires Perl at runtime, we need to ensure
-  it is installed whenever our skelpkg is installed. Create a
-  @code{preinst} @ref{skelpkg hooks, hook} for that purpose, which
-  will call the @command{pkg-inst} command.
-
-  @pindex build/lib/prepare-preinst-010-rdeps
-  Because runtime dependencies are such a frequently needed kind of
-  hook, there is @command{prepare-preinst-010-rdeps} for that. Its
-  arguments will be converted into the corresponding
-  @file{skelpkg/$namenhash/hooks/preinst/010-rdeps} executable file.
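A rough sketch of what such a generator could do (the @code{mk_rdeps_hook} name and the hook's body are illustrative assumptions, not the real program):

```shell
# Hypothetical sketch: turn the arguments (a skelbin name followed by
# runtime dependencies) into an executable preinst hook that installs
# those dependencies with pkg-inst before the package itself is used.
mk_rdeps_hook() {
    name=$1; shift
    hook=$name/skelpkg/$name/hooks/preinst/010-rdeps
    mkdir -p "${hook%/*}"
    {
        echo '#!/bin/sh -e'
        echo "\"\$BASS_ROOT\"/build/bin/pkg-inst $*"
    } >"$hook"
    chmod +x "$hook"
}
```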
-
-@item
-  Just as practice, let's also create a @code{postinst} hook that
-  prints our promise to cite GNU Parallel: just an ordinary
-  @file{01will-cite} script. After the @command{parallel-20240122}
-  skelpkg is installed, you will see that promise message.
-
-@pindex build/lib/mk-pkg
-@item
-  And at last we are ready to create the final skelpkg from our
-  existing skelbin directory. @command{$BASS_ROOT/build/lib/mk-pkg}
-  takes the name of the directory you need to pack into a skelpkg. It
-  will automatically include the necessary @file{name} and
-  @file{buildinfo} files with their corresponding @file{.meta4}
-  files.
-
-  @command{mk-pkg} writes the skelpkg to stdout, which is not
-  explicitly captured in that @command{redo} target; it passes
-  through to @command{redo} itself, becoming the resulting skelpkg.
-
-@end itemize
+++ /dev/null
-@node Daemontools
-@cindex daemontools
-@section Daemontools
-
-Most daemons and services are designed to be run under some supervisor
-program: they are automatically restarted in case of failure, there is
-a reliable signalling ability, and flexible, easy to use logging
-capabilities.
-
-@url{http://cr.yp.to/daemontools.html, daemontools}-like
-solutions are advisable.
-@url{https://untroubled.org/daemontools-encore/, daemontools-encore} is
-very good option. But @url{http://smarden.org/runit/, runit} or
-@url{http://www.skarnet.org/software/s6/, s6} are also perfect choices.
-They are cross-platform, easy to compile and have a low learning curve.
+++ /dev/null
-@node CI
-@cindex CI
-@unnumbered CI
-
-CI system consists of master and slave, joined together by shared
-filesystem. Masters create tasks for slaves. Slaves take those tasks and
-create jobs with computation results.
-
-@verbatiminclude ci/overview.plantuml.txt
-
-@include ci/task.texi
-@include ci/job.texi
-@include ci/daemontools.texi
-@include ci/master.texi
-@include ci/slave.texi
-@include ci/reporter.texi
-@include ci/notifier.texi
+++ /dev/null
-@node Job
-@cindex job
-@section Job
-
-Job is the slave's output of running/completed task.
-This is a directory with at least:
-
-@table @file
-
-@item alive
-A file constantly @command{touch}ed while the job is running. That
-updates the file's mtime and tells that the process is still alive.
-
-@item host.txt
-Slave's hostname. It is also used to determine when the job was started.
-
-@item tmp-path.txt
-Path to temporary directory on the slave, in case it failed and you
-wish to look at its state.
-
-@item pkg.txt
-A list of the skelpkgs installed during the build.
-
-@item env.txt
-Dump of environment variables used to start each step.
-
-@item steps/
-Subdirectory containing more subdirectories named after each step.
-Each of those directories contains at least:
-
- @table @file
-
- @item started
- Empty file used for creation time determination.
-
- @item stdout.txt, stderr.txt
- @url{http://cr.yp.to/libtai/tai64.html, TAI64N}-prefixed output
- of corresponding streams.
-
- @item exitcode.txt
-  May not exist if the step is in progress. Contains either the
-  decimal value of the step's return code, or the string
-  @code{timeout} in case the step was killed due to prolonged lack
-  of output.
-
- @end table
-
-@end table
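Those files are enough to tell live jobs from dead ones. For example (an illustrative sketch; the one-minute threshold is arbitrary and @command{find}'s @option{-mmin} is a widespread extension rather than strict POSIX):

```shell
# Sketch: a job whose alive file was touched within the last minute
# is considered running.  The threshold and find's -mmin test (a
# common GNU/BSD extension, not strict POSIX) are illustrative.
job_is_alive() {
    [ -n "$(find "$1" -name alive -mmin -1 2>/dev/null)" ]
}
```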
+++ /dev/null
-@node Master
-@section Master
-
-Master node(s) are intended to create tasks. As a rule a task is
-created as a reaction to someone pushing a commit. There are no
-specialised daemons committed to run on masters, because each
-project's task making process can vastly differ in its details. As a
-rule, only the atomic counter utilities are shared by the "task
-makers".
-
-Let's see how the example @file{example/goredo} CI pipeline is created.
-We want to run the tests when someone pushes the commit.
-
-@itemize
-
-@item
- Prepare necessary directories at the very beginning:
-
-@example
-mkdir -p /nfs/revs/goredo
-mkdir -p /nfs/tasks/ctr/0
-mkdir -p /nfs/tasks/@{cur,old,tmp@}
-mkdir /nfs/jobs
-@end example
-
-@item
-  The first thing to do is to create Git's @file{post-receive} hook,
-  which will touch files named after the revisions needing to be tested.
-
-@example
-$ cat >goredo.git/hooks/post-receive <<EOF
-#!/bin/sh -e
-
-REVS=/nfs/revs/goredo
-ZERO="0000000000000000000000000000000000000000"
-
-read prev curr ref
-[ "$curr" != $ZERO ] || exit 0
-[ "$prev" != $ZERO ] || prev=$curr^
-git rev-list $prev..$curr | while read rev ; do
- mkdir -p $REVS/$ref
- echo BASSing $ref/$rev... >&2
- touch $REVS/$ref/$rev
-done
-EOF
-@end example
-
-  After pushing a bunch of commits, corresponding empty files are
-  created in the revisions directory. Each filename is a commit's
-  hash. Those are basically notification events about the need to
-  create corresponding tasks.
-
-@item
-  Someone has to process those events. Each project has its own
-  @command{task-maker}, because there are so many variations in how
-  the code and build steps can be retrieved and created. Let's create one:
-
-@example
-#!/bin/sh -e
-
-[ -n "$BASS_ROOT" ]
-sname="$0" . $BASS_ROOT/lib/rc
-[ -n "$REVS" ] || @{
- echo '"REVS"' is not set >&2
- exit 1
-@}
-[ -n "$PROJ" ] || @{
- echo '"PROJ"' is not set >&2
- exit 1
-@}
-[ -n "$STEPS" ] || @{
- echo '"STEPS"' is not set >&2
- exit 1
-@}
-[ -n "$ARCHS" ] || @{
- echo '"ARCHS"' is not set >&2
- exit 1
-@}
-
-cd $REVS
-rev=$(find . -type f | sed -n 1p)
-[ -n "$rev" ]
-rev_path=$(realpath $rev)
-rev=$(basename $rev)
-
-task_proj=goredo
-task_version=$(cd $PROJ ; $BASS_ROOT/master/bin/version-for-git $rev)
-[ -n "$task_version" ]
-task=":$task_proj:$task_version:"
-mkdir $TASKS/tmp/$task
-trap "rm -fr $TASKS/tmp/$@{task@}*" HUP PIPE INT QUIT TERM EXIT
-
-cd $STEPS
-$BASS_ROOT/master/bin/version-for-git >$TASKS/tmp/$task/steps-version.txt
-git rev-parse @ >$TASKS/tmp/$task/steps-revision.txt
-# $TAR cf - --posix * | $COMPRESSOR >$TASKS/tmp/$task/steps.tar
-git archive @ | $COMPRESSOR >$TASKS/tmp/$task/steps.tar
-
-cd $PROJ
-echo $task_version >$TASKS/tmp/$task/code-version.txt
-git show --no-patch --pretty=fuller $rev >>$TASKS/tmp/$task/code-version.txt
-echo $rev >$TASKS/tmp/$task/code-revision.txt
-git archive $rev | $COMPRESSOR >$TASKS/tmp/$task/code.tar
-
-tasks=$($BASS_ROOT/master/bin/clone-with-ctr $task
-	$(for arch in $ARCHS ; do echo $@{task@}$@{arch@} ; done))
-[ -n "$tasks" ]
-for t in $tasks ; do
- echo $t
- mv $t ../cur
-done
-
-rm $rev_path
-@end example
-
- @itemize
- @item
-  Source @file{$BASS_ROOT/lib/rc} to get all possibly useful
-  environment variables. Expect @env{$REVS} (set by the sourced
-  @env{$BASS_RC} file) to point to the directory filled by the
-  @file{post-receive} hook. Expect @env{$PROJ} to point to the Git
-  repository where we can read the code. Expect @env{$STEPS} to point
-  to the Git repository with build steps for that project. Expect
-  @env{$ARCHS} to hold a whitespace separated list of architectures
-  to create tasks for.
-
- @item
- Take one file from @file{$REVS} directory. Then go to project's root
- and use @command{version-for-git} to get human readable name of the
- commit.
-
- @item
-  The "task" variable holds the partly constructed name of the future task.
-
- @item
- Create temporary directory in @file{$TASKS/tmp}.
-
- @item
- Go to @file{$STEPS}, save its Git's commit version in
- @file{$task/steps-revision.txt} and save all its code in
- @file{$task/steps.tar}.
-
- @item
- Go to @file{$PROJ} and similarly save its code version and code itself.
-
- @pindex clone-with-ctr
- @item
- Go to temporary directory for tasks and call
- @command{clone-with-ctr}. It copies your specified temporary
- directory to directories with the architecture in their name. One
- directory per architecture specified in @env{$ARCHS}.
-
-  Why not an ordinary @command{cp -a}? It fsyncs your source
-  directory and hardlinks all files, taking virtually no additional
-  space for each of your tasks.
-
- @item
-  At last, move your fsynced tasks out of @file{tmp/}. That way they
-  will appear atomically for processes looking at @file{cur/}.
- @end itemize
-
-@item
- That @command{task-maker} is expected to be run under some kind of
- supervisor, like @ref{Daemontools, daemontools}.
-
-@item
-  Well, the task is created and the event is removed: the master has
-  finished its job. Now it is time for a slave to acquire one of the
-  appeared tasks.
-
-@end itemize
-
-Note that you can easily create tasks on cron events too, just by
-touching files at a specified time. Whatever workflow you wish!
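For instance, a nightly build could be triggered by a cron entry that fabricates a synthetic "revision" event (the paths, the naming and the @code{make_nightly_event} helper are hypothetical):

```shell
# Hypothetical cron-driven event: touching a file in the revisions
# directory is all a nightly build needs; the task-maker picks it up
# exactly like a pushed commit.  $REVS layout and naming are
# illustrative.
make_nightly_event() {
    mkdir -p "$REVS/nightly"
    touch "$REVS/nightly/$(date +%Y%m%d)"
}
# illustrative crontab entry:
# 0 3 * * * REVS=/nfs/revs/goredo /path/to/make-nightly-event
```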
+++ /dev/null
-@node Notifier
-@cindex Notifier
-@unnumbered Notifier
-
-@pindex master/bin/notify-non-started
-@pindex master/bin/notify-non-taken
-@command{master/bin/notify-non-started} and
-@command{master/bin/notify-non-taken} commands can be used to notify
-you about problematic tasks.
+++ /dev/null
-@node Reporter
-@cindex Reporter
-@unnumbered Reporter
-
-@pindex master/bin/reporter
-@command{master/service/reporter} can be run on a master node. This is
-Web-server showing you the current tasks/jobs state.
-
-A list of tasks is shown on its main page. A job's start time is the
-@file{host.txt} file's creation time. A job's finish time is the last
-timestamp on the @file{alive} file.
-
-The list of steps makes columns for each task. Similarly, the start
-time, finish time and duration are shown. A step's colour depends on
-the @file{exitcode.txt} value.
-
-Links to the @file{stdout.txt}/@file{stderr.txt} files are also shown.
-Pay attention that they contain the @code{?tai64nlocal=1} parameter,
-which converts raw hexadecimal TAI64N timestamps to human readable
-form.
+++ /dev/null
-@node Slave
-@section Slave
-
-After the master has created some tasks on the shared filesystem, a
-slave must take and execute them.
-
-@itemize
-
-@item
- @pindex task-taker
- @command{slave/bin/task-taker} is used for that task:
-
-@example
-$ export BASS_ROOT=/path/to/bass BASS_RC=/path/to/rc
-$ $BASS_ROOT/slave/bin/task-taker
-@end example
-
-  It is also expected to be run under some kind of supervisor. It
-  saves a @file{lastnum} file in the current directory with the
-  counter value of the latest task it processed.
-
- You may run multiple @command{task-takers} to run jobs in parallel.
-
-@item
- @command{task-taker} runs @command{slave/bin/job-starter} on a taken
- task.
-
-@item
- @pindex slave/bin/job-starter
-  Initially @command{job-starter} takes the task and checks whether
-  it has the appropriate architecture and, possibly, the optional
-  hostname set. It exits successfully if the task is not for us:
-  another slave will take it instead.
-
-@item
- @cindex slave-base
-  Then it creates the job state in @file{$JOBS/cur/$task}. Various
-  metainformation is filled in there, like the path to the temporary
-  directory, the hostname and so on. @command{build/bin/mk-skelenv}
-  creates the @ref{skelenv} in that temporary directory and installs
-  the @code{slave-base} package, which depends on various utilities
-  needed for running the testing steps.
-
-@item
- @cindex tmux
-  A @file{tmux} executable file is created in that directory, which
-  you can use to attach to the job's @command{tmux} instance.
-
-@item
- @file{code.tar} and @file{steps.tar} are unpacked to that directory
- under @file{code/} and @file{steps/} paths.
-
-@item
-  A background heartbeat process is started that touches the
-  @file{$job/alive} file every second.
-
-@item
- @pindex slave/bin/steps-runner
- Then the @command{tmux} is started in @file{steps/} directory and
- runs @command{slave/bin/steps-runner}.
-
-@item
-  If @command{steps-runner} succeeds, then the temporary directory is
-  removed. Otherwise @command{tmux} is left running, waiting for
-  someone to attach to it and press Enter. If no input happens for
-  @env{$FAILED_JOB_WAITTIME} seconds (one hour by default), it exits
-  and removes the temporary directory.
-
-@end itemize
-
-What exactly does @command{steps-runner} do?
-
-@itemize
-
-@item
-  For each step, sorted lexicographically, it creates a corresponding
-  output directory in the job's directory with @file{started},
-  @file{stdout.txt} and @file{stderr.txt} files.
-
-@item
-  Each step is run with its stdout/stderr redirected through the
-  @command{tai64n} utility, prepending a timestamp to each output
-  line. Its exitcode is saved in @file{exitcode.txt}. Steps are
-  always run in the @file{code/} directory.
-
-@item
-  A background process is also started to watch the step's output
-  progress. Every second it checks whether any stdout/stderr output
-  happened during the last @env{$LINE_TIMEOUT} (ten minutes by
-  default) seconds. If the step is stuck (no output for that long),
-  then it is killed and @code{timeout} is written to
-  @file{exitcode.txt}.
-
-@end itemize
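The watchdog logic can be sketched as follows (illustrative only: GNU @command{touch -d} is assumed, and the real @command{steps-runner} may differ in every detail):

```shell
# Illustrative watchdog sketch: kill the step process when its output
# file gains no new mtime for $LINE_TIMEOUT seconds.  Uses GNU
# touch -d for the reference timestamp; not the real steps-runner.
watchdog() {
    pid=$1 out=$2 limit=${LINE_TIMEOUT:-600}
    ref=$(mktemp)
    while kill -0 "$pid" 2>/dev/null; do
        # reference file whose mtime is "limit seconds ago"
        touch -d "@$(( $(date +%s) - limit ))" "$ref"
        if [ -z "$(find "$out" -newer "$ref" 2>/dev/null)" ]; then
            kill "$pid" 2>/dev/null
            echo timeout >exitcode.txt
            break
        fi
        sleep 1
    done
    rm -f "$ref"
}
```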
+++ /dev/null
-@node Task
-@cindex task
-@section Task
-
-Task is the input for slaves. This is a directory with at least:
-
-@table @file
-
-@cindex code.tar
-@cindex code-revision.txt
-@cindex code-version.txt
-@item code.tar, code-revision.txt, code-version.txt
-Compressed tarball of the code you have to test. As a rule that should
-be some kind of @command{git archive} output. Text files accompany it
-with human readable version and revision (commit hash) information.
-
-@cindex steps.tar
-@cindex steps-revision.txt
-@cindex steps-version.txt
-@item steps.tar, steps-revision.txt, steps-version.txt
-Same as above, but the archive contains the steps your slave has to
-perform. Steps are a bunch of lexicographically ordered executable files.
-
-@end table
-
-@cindex task name
-A task's directory name has several fields:
-@code{NUM:PROJ:VERSION:ARCH[:HOST]}. The optional @code{HOST} can be
-used to force the job to run on a specified host. @code{NUM} is a
-monotonically increasing number, a kind of unique identifier of the
-task.
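Splitting such a name into its fields is a one-liner in POSIX shell (the @code{parse_task} helper is illustrative):

```shell
# Sketch: split a NUM:PROJ:VERSION:ARCH[:HOST] task name into its
# fields via IFS-based word splitting.  Assumes VERSION itself
# contains no ':' characters.
parse_task() {
    oldIFS=$IFS
    IFS=:
    set -- $1
    IFS=$oldIFS
    num=$1 proj=$2 version=$3 arch=$4 host=${5:-}
}
```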
-
-@vindex TASKS
-@file{$TASKS} directory has four subdirectories:
-
-@table @file
-@item ctr
- Atomically incrementing counter state.
-@item tmp
-  Temporary storage for tasks being created.
-  They are moved to @file{cur/} afterwards.
-@item cur
- Ready to be taken tasks.
-@item old
- Archived tasks, non-taken or completed.
-@end table
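One way to implement such an atomically incrementing counter on a shared filesystem is to keep the value as a directory name and rely on the atomicity of rename. This is purely an illustrative sketch, not BASS'es actual implementation:

```shell
# Purely illustrative sketch of a rename-based atomic counter: the
# current value is the name of the single entry inside ctr/.  rename()
# is atomic, so of two racing incrementers only one mv succeeds and
# the loser retries with the fresh value.
ctr_next() {
    while :; do
        cur=$(ls "$1")
        if mv "$1/$cur" "$1/$((cur + 1))" 2>/dev/null; then
            echo $((cur + 1))
            return 0
        fi
    done
}
```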
+++ /dev/null
-@node Tutorial
-@section Tutorial
-
-Let's create a test pipeline for the @command{goredo} project.
-We want to run
-the tests when someone pushes the commit.
-
-@enumerate
-
-@item
- Prepare necessary directories at the very beginning:
-
-@example
-mkdir -p /nfs/revs/goredo
-mkdir -p /nfs/tasks/ctr/0
-mkdir -p /nfs/tasks/@{cur,old,tmp@}
-mkdir /nfs/jobs
-@end example
-
-@item
-  The first thing to do is to create Git's @file{post-receive} hook,
-  which will touch files named after the revisions needing to be tested.
-
-@example
-$ cat >goredo.git/hooks/post-receive <<EOF
-#!/bin/sh -e
-
-REVS=/nfs/revs/goredo
-ZERO="0000000000000000000000000000000000000000"
-
-read prev curr ref
-[ "$curr" != $ZERO ] || exit 0
-[ "$prev" != $ZERO ] || prev=$curr^
-git rev-list $prev..$curr | while read rev ; do
- mkdir -p $REVS/$ref
- echo BASSing $ref/$rev... >&2
- touch $REVS/$ref/$rev
-done
-EOF
-@end example
-
-  After pushing a bunch of commits, corresponding empty files are
-  created in the revisions directory. Each filename is a commit's
-  hash. Those are basically notification events about the need to
-  create corresponding tasks.
-
-@item
-  Someone has to process those events. Each project has its own
-  @command{task-maker}, because there are so many variations in how
-  the code and build steps can be retrieved and created. Let's create one:
-
-@example
-#!/bin/sh -e
-
-[ -n "$BASS_ROOT" ]
-sname="$0" . $BASS_ROOT/lib/rc
-[ -n "$REVS" ] || @{
- echo '"REVS"' is not set >&2
- exit 1
-@}
-[ -n "$PROJ" ] || @{
- echo '"PROJ"' is not set >&2
- exit 1
-@}
-[ -n "$STEPS" ] || @{
- echo '"STEPS"' is not set >&2
- exit 1
-@}
-[ -n "$ARCHS" ] || @{
- echo '"ARCHS"' is not set >&2
- exit 1
-@}
-
-cd $REVS
-rev=$(find . -type f | sed -n 1p)
-[ -n "$rev" ]
-rev_path=$(realpath $rev)
-rev=$(basename $rev)
-
-task_proj=goredo
-task_version=$(cd $PROJ ; $BASS_ROOT/master/bin/version-for-git $rev)
-[ -n "$task_version" ]
-task=":$task_proj:$task_version:"
-mkdir $TASKS/tmp/$task
-trap "rm -fr $TASKS/tmp/$@{task@}*" HUP PIPE INT QUIT TERM EXIT
-
-cd $STEPS
-$BASS_ROOT/master/bin/version-for-git >$TASKS/tmp/$task/steps-version.txt
-git rev-parse @ >$TASKS/tmp/$task/steps-revision.txt
-# $TAR cf - --posix * | $COMPRESSOR >$TASKS/tmp/$task/steps.tar
-git archive @ | $COMPRESSOR >$TASKS/tmp/$task/steps.tar
-
-cd $PROJ
-echo $task_version >$TASKS/tmp/$task/code-version.txt
-git show --no-patch --pretty=fuller $rev >>$TASKS/tmp/$task/code-version.txt
-echo $rev >$TASKS/tmp/$task/code-revision.txt
-git archive $rev | $COMPRESSOR >$TASKS/tmp/$task/code.tar
-
-tasks=$($BASS_ROOT/master/bin/clone-with-ctr $task
-	$(for arch in $ARCHS ; do echo $@{task@}$@{arch@} ; done))
-[ -n "$tasks" ]
-for t in $tasks ; do
- echo $t
- mv $t ../cur
-done
-
-rm $rev_path
-@end example
-
- @itemize
- @item
-  Source @file{$BASS_ROOT/lib/rc} to get all possibly useful
-  environment variables. Expect @env{$REVS} (set by the sourced
-  @env{$BASS_RC} file) to point to the directory filled by the
-  @file{post-receive} hook. Expect @env{$PROJ} to point to the Git
-  repository where we can read the code. Expect @env{$STEPS} to point
-  to the Git repository with build steps for that project. Expect
-  @env{$ARCHS} to hold a whitespace separated list of architectures
-  to create tasks for.
-
- @item
- Take one file from @file{$REVS} directory. Then go to project's root
- and use @command{version-for-git} to get human readable name of the
- commit.
-
- @item
-  The "task" variable holds the partly constructed name of the future task.
-
- @item
- Create temporary directory in @file{$TASKS/tmp}.
-
- @item
- Go to @file{$STEPS}, save its Git's commit version in
- @file{$task/steps-revision.txt} and save all its code in
- @file{$task/steps.tar}.
-
- @item
- Go to @file{$PROJ} and similarly save its code version and code itself.
-
- @pindex clone-with-ctr
- @item
- Go to temporary directory for tasks and call
- @command{clone-with-ctr}. It copies your specified temporary
- directory to directories with the architecture in their name. One
- directory per architecture specified in @env{$ARCHS}.
-
-  Why not an ordinary @command{cp -a}? It fsyncs your source
-  directory and hardlinks all files, taking virtually no additional
-  space for each of your tasks.
-
- @item
-  At last, move your fsynced tasks out of @file{tmp/}. That way they
-  will appear atomically for processes looking at @file{cur/}.
- @end itemize
-
-@item
- That @command{task-maker} is expected to be run under some kind of
- supervisor, like @ref{Daemontools, daemontools}.
-
-@item
-  Well, the task is created and the event is removed: the master has
-  finished its job. Now it is time for a slave to acquire one of the
-  appeared tasks.
-
- @pindex task-taker
- @command{slave/bin/task-taker} is used for that task:
-
-@example
-$ export BASS_ROOT=/path/to/bass BASS_RC=/path/to/rc
-$ $BASS_ROOT/slave/bin/task-taker
-@end example
-
-  It is also expected to be run under some kind of supervisor. It
-  saves a @file{lastnum} file in the current directory with the
-  counter value of the latest task it processed.
-
-@item
- @command{task-taker} runs @command{slave/bin/job-starter} on a task.
-
-@end enumerate
+++ /dev/null
-@node Contacts
-@cindex contacts
-@cindex maillist
-@unnumbered Contacts
-
-There is a discussion maillist available at
-@url{http://lists.cypherpunks.su/bass.html, bass}.
-
-Official website is @url{http://www.bass.cypherpunks.su/}.
--- /dev/null
+BASS -- Build Automation Steady System. It includes cross-platform
+package manager and distributed continuous integration system.
+
+ Simple as bass guitar with only a few strings, yet as powerful!
+
+Package manager is mainly intended for preparing necessary dependencies
+for continuous integration build processes. But it can be used
+completely independently.
+
+Everything is aimed to work under at least
+=> https://www.freebsd.org/ FreeBSD\r
+=> https://www.debian.org/ Debian GNU/Linux\r
+=> https://astralinux.ru/ (Astra SE particularly)\r
+There should be a minimal number of required dependencies on each node
+of the system, to ease installation and carry as little burden as
+possible. A single small text file should be enough for the whole
+configuration of a node. There should be just a few executable files
+and available commands on each node. Nearly everything is written in
+POSIX shell.
+
+BASS is
+=> https://www.gnu.org/philosophy/pragmatic.html copylefted\r
+=> https://www.gnu.org/philosophy/free-sw.html free software\r
+licenced under
+=> https://www.gnu.org/licenses/gpl-3.0.html GNU GPLv3\r
+
+[Why?]
+[Overview]
+[INSTALL]
+[CI/]
+[Build/]
+[Contacts]
+
+[Index/Concepts]
+[Index/Programs]
+[Index/Variables]
+++ /dev/null
-\input texinfo
-@settitle BASS @value{VERSION}
-
-@copying
-Copyright @copyright{} 2024-2025 @email{stargrave@@stargrave.org, Sergey Matveev}
-@end copying
-
-@node Top
-@top BASS @value{VERSION}
-
-@quotation
-Simple as bass guitar with only a few strings, yet as powerful!
-@end quotation
-
-BASS -- Build Automation Steady System. It includes cross-platform
-package manager and distributed continuous integration system.
-
-Package manager is mainly intended for preparing necessary dependencies
-for continuous integration build processes. But it can be used
-completely independently.
-
-Everything is aimed to work under at least
-@url{https://www.freebsd.org/, FreeBSD} and
-@url{https://www.debian.org/, Debian} GNU/Linux
-(@url{https://astralinux.ru/, Astra SE} particularly, which is a very
-popular distribution in the Russian Federation, although a
-closed-source proprietary one) operating systems. There should be a
-minimal number of required dependencies on each node of the system, to
-ease installation and carry as little burden as possible. A single
-small text file should be enough for the whole configuration of a
-node. There should be just a few executable files and available
-commands on each node. Nearly everything is written in POSIX shell.
-
-BASS is @url{https://www.gnu.org/philosophy/pragmatic.html, copylefted}
-@url{https://www.gnu.org/philosophy/free-sw.html, free software}
-licenced under @url{https://www.gnu.org/licenses/gpl-3.0.html, GNU GPLv3}.
-
-@insertcopying
-
-@include why.texi
-@include overview.texi
-@include install.texi
-@include build/index.texi
-@include ci/index.texi
-@include contacts.texi
-
-@node Indices
-@unnumbered Indices
-
-@node Concepts Index
-@section Concepts Index
-@printindex cp
-
-@node Programs Index
-@section Programs Index
-@printindex pg
-
-@node Variables Index
-@section Variables Index
-@printindex vr
-
-@bye
+++ /dev/null
-@node Install
-@unnumbered Install
-
-Look at the build @ref{Requirements, requirements}. Some of those
-programs can be built with the help of the @file{contrib/prepare-deps}
-scripts. Both master and slave nodes will most likely also require a
-@ref{Daemontools, daemontools}-like solution, but you should be able to
-build it with the BASS build system itself and install it into a
-@ref{skelenv}.
-
-The project is currently heavily work in progress, especially its CI
-part (package building/management is pretty stable now).
-
-@example
-$ git clone git://git.cypherpunks.su/bass.git
-$ cd bass/contrib/prepare-deps
-$ ./dl
-$ ./do
-$ PATH="$(realpath local/bin):$(realpath local/go/bin):$PATH"
-$ cd ../../build
-$ echo SKELBINS=/tmp/skelbins >rc
-$ export BASS_RC=$(realpath rc)
-$ pkg/mk-arch
-$ redo pkg/FreeBSD-amd64-13.2-RELEASE/shells/zsh-5.9
-$ cd /tmp
-$ mkdir myskelenv
-$ cd myskelenv
-$ /path/to/bass/build/bin/mk-skelenv
-$ /path/to/bass/build/bin/pkg-inst shells/zsh-5.9
-$ . ./rc
-$ zsh
-@end example
-
-You can also use
-@code{anongit@@master.git.stargrave.org:cypherpunks.su/bass.git},
-@code{anongit@@slave.git.stargrave.org:cypherpunks.su/bass.git},
-@code{anongit@@master.git.cypherpunks.su:cypherpunks.su/bass.git},
-@code{anongit@@slave.git.cypherpunks.su:cypherpunks.su/bass.git},
-@url{git://git.stargrave.org/bass.git},
-@url{git://y.git.stargrave.org/bass.git},
-@url{git://y.git.cypherpunks.su/bass.git} URLs instead.
+++ /dev/null
-@node Why?
-@unnumbered Why?
-
-Why not @url{https://www.buildbot.net/, BuildBot},
-@url{https://www.jenkins.io/, Jenkins},
-@url{https://www.travis-ci.com/, TravisCI} or similar solutions? They
-are aimed at large installations, where highly isolated untrusted code
-is run and no developers are expected to be able to log in to the slave
-nodes to debug failing tests. But if you want a small installation,
-where you can easily log in everywhere and only Unix-like systems are
-involved, all those CI solutions are a burden to install and use
-quickly.
-
-Moreover, all of them require a bloated JavaScript-driven Web browser.
-BuildBot is somewhat of an exception, being much simpler: its early
-versions worked without a JS-poisoned Web interface. But try to install
-Python software with @strong{source} dependencies from PyPI -- you will
-be surprised to see that even very basic packages depend on Rust.
-
-What is the problem with Rust? There is no official way to bootstrap
-it, other than downloading and blindly executing some binaries for your
-platform.
-
-None of them provides any kind of package management. They do not have
-to, but then you are expected to manually install the required software
-on each slave. Docker could help there, but it supports only GNU/Linux.
-
-But what portable package manager choices are available that support
-multiple completely different operating systems?
-@url{https://nixos.org/, Nix} has supported only GNU/Linux for a long
-time (though initially it supported at least FreeBSD). The only
-well-known cross-platform choice is NetBSD's
-@url{https://www.pkgsrc.org/, pkgsrc}. But, unlike Nix, being a
-classical installation system, it cannot painlessly install multiple
-versions of the same package or work in the completely isolated
-temporary directory of a CI build job. And both of them, especially
-Nix, have a considerable learning curve.