Crates.io | aelhometta |
lib.rs | aelhometta |
version | 1.0.16 |
source | src |
created_at | 2024-01-16 15:26:08.614296 |
updated_at | 2024-06-26 13:02:34.943683 |
description | Archaic attempt at autonomous non-sandboxed distributed artificial life of assembler automaton type. |
homepage | |
repository | https://github.com/aelhometta/aelhometta |
max_upload_size | |
id | 1101713 |
size | 538,857 |
Archaic attempt at autonomous non-sandboxed distributed artificial life of assembler automaton type, it features: separation of descriptive and executive data that provides branches and loops without jump instructions, encrypted publish-subscribe interaction with other instances over Tor, input/output through ordinary files associated with external sensors and actuators, and built-in shell.
Of course it is akin to AlChemy[FON1], [FON2] / Avida[ADA1], [OFR1] / Coreworld[RAS1] / Stringmol[HIC1] / (Network) Tierra[RAY1], [RAY3] / ... / biolife, and it, as a project and a concept, may collapse or branch sooner rather than later due to the participation of a few devices you have certain control over, which are periodically online and which are optionally connected, on the one hand, to microphones, cameras, thermometers, receivers, dosimeters etc. (inputs) and, on the other hand, to speakers, monitors, conditioners, transmitters, control rods and so on (outputs). However, an instance can run completely isolated in the memory of an offline device with no access to the outside world, or switch between offline and online modes.
By now you probably know the big shining elusive goals of such enterprises better than us, — open-endedness, Cambrian explosion, blah-blah-blah, — the problem is, what if they are incompatible with safety? What if necessary (though in no way sufficient) condition is to allow the interaction of the artificial environment at hand with the real world beyond sandboxing threshold? Are we, shaped by evolution in this world, able to recognise open-endedness if it has not been moulded by the forces of the same world, when some of them do not have even names? If it kills, it will be killed... or rather less adapted variations will be, but more adapted ones will survive.
If so, then we ought to choose: open-endedness XOR safety. On the other hand, our precious safety may follow from... experience, simply: decades of research, volumes of reflections, but — without exceptions, since we are still here... yet — in the end, a fizzle. As noted twenty years ago,
“All of this was impressive work, and it pointed the way forward to a consolidation of what these imaginative individuals had done. But the consolidation never happened. At each conference I went to, the larger group of people involved all seemed to want to do things from scratch, in their own way. Each had his or her own way of setting up the issues. There was not nearly enough work that built on the promising beginnings of Ray and others. The field never made a transition into anything resembling normal science. And it has now ground to a halt.”[GOD1]
CONTENTS
Nodes, kind of memory units — contain elementary instructions, have opaque addresses, join into chains via pointers to next nodes.
Controllers, kind of CPUs — chosen randomly at each tick, move along chains of nodes, execute instructions changing their local states and the global state.
No "pointer arithmetic" and thus free "write protection" due to opacity of a node address. Bye-bye brittleness? Hello rigidness! Also, less Euclidicity of a "space" ælhometta inhabits: it is neither 1D, nor any nD, connectivity is poor, things do not "move" across short or long distances (in fact, there is no metric).
Separation of descriptive ("schemes" akin to chromosomes) and executive ("constructors" akin to ribosomes) chains in ancestral entity: one chain describes the scheme and another chain realizes it. This is probably the main deviation from "traditional", in these parts of artificial life world, approach, where an "organism" usually scans and reproduces its own code... although it resembles (hyper)parasites that had evolved in Tierra[RAY1], and, of course, the original approach of von Neumann has this separation[NEU1].
(Such separation provides) non-linearity of execution flow without jump instructions, neither by address, nor by template; instead, each node has 2 pointers, one to the main next node, and another to the alternative next node. The choice is made by the controller according to a certain flag, but both routes are defined by the scheme from which the executed chain has been constructed.
Based on the original chain, a new chain can be replicated (linearly copied verbatim) or constructed (non-linearly built taking into account special "construction" instructions).
"Mortality" via ring buffers and dangling pointers: when the maximum number of nodes or controllers has been allocated, the newest ones replace the oldest ones, and when a controller moves to non-existent node following the pointer of the previous node, this controller ceases to exist.
2 globally accessible arrays — of nodes' opaque addresses and of integers — for communication between controllers. The former contains not all such addresses, but only those transmitted by controllers. The addressing is linear here, thus Euclidicity strikes back.
Encrypted interaction with other instances over the Internet, specifically over Tor, following the publish-subscribe pattern of ZeroMQ. The data being exchanged is an array of 64-bit little endian integers, at least at the level Ælhometta provides; how ælhomettas interpret that data is a meaningless question until they reach a certain level of complexity.
Input and output to connect with "real world" by means of ordinary files containing 64-bit little endian integers as well. Missing link here is a bunch of external applications connecting files themselves with real world, of 2 kinds: recorders writing data from sensors to files and players reading data from files to control actuators.
Lack of ways to tinker with individual nodes and controllers "manually", so that you don't play at atoms of a dice and let sister Chance do it.
Control using interactive shell or run for specified duration.
State is automatically saved at exit and loaded at start.
Cross-platform — it is known to work on, and there are precompiled binaries for, the following targets, at least:
x86_64-unknown-linux-gnu
i686-unknown-linux-gnu
aarch64-unknown-linux-gnu
armv7-unknown-linux-gnueabihf
x86_64-apple-darwin
aarch64-apple-darwin
aarch64-linux-android
armv7-linux-androideabi
x86_64-pc-windows-msvc/-gnu
i686-pc-windows-msvc/-gnu
Runs even in text mode.
Small source, less than 3 times larger than this entire README.
Deployment depends on the target system where Ælhometta is going to be run. Download the binary release or build it from source, which is available at crates.io and at GitHub.
Binaries are the quickest way to try the thing, while building from source specifically for your device paves the way for optimisations that may be absent from "generic" binaries; in particular, speed is important, — 10% does not look so small a gain when it becomes the difference between 10 and 11 days, months, years of waiting for (digital) evolution to reach some milestone... or admitting there is none. (Who's gonna wait so long?) Of course, you need certain interest to invest in that tuning.
Most of the dependencies-related bothers concern libzmq and libsodium, the libraries that are required not by Ælhometta directly, but rather by the emyzelium crate it depends on, which is a thin wrapper around parts of ZeroMQ; such thinness is the reason we call them dependencies of Ælhometta itself hereinafter.
The simplest way is to download the latest binary release that corresponds to your architecture.
Note that it contains dependencies — .so files in deps/ — which provide portability: aelhometta_using_deps.sh should work whether these libraries are present system-wide or not. See also the *-unknown-linux-gnu-related part of build.rs.
To install dependencies from the official repository system-wide (on Debian-based distros):
$ sudo apt install libzmq3-dev
Or build Ælhometta from source: install the same dependencies as above, then install the Rust toolchain, download the source e.g. to ~/rust/aelhometta/, and from there
$ cargo build --release
Next stop down the rabbit hole is to build from source the libzmq dependency of Ælhometta and its core dependency that provides security, libsodium. This has its own "costs" (of toolchain setup and compilation issues... most of the rest of this section belongs to the C/C++ world, Rust is mentioned briefly in the end), but allows better device-specific tuning (e.g. the --enable-opt option of libsodium's configure) and lessens the number of shared libraries you have to pack with the executable; usually these 2 suffice, in comparison with the additional libpgm, libnorm, libcomm_err, ... from official repositories with "all-included" libzmq. Evidently, this step is mandatory when there are no prebuilt binaries/libraries for the device at hand.
Hereinafter we assume that libsodium and libzmq are not installed system-wide, so that they do not interfere with the building process as wrong dependencies.
$ sudo apt install cmake
(it will install make too, as recommended)
I) libsodium
Download libsodium-X.Y.Z[-stable].tar.gz from https://download.libsodium.org/libsodium/releases/, extract its main content to e.g. ~/libsodium/, and create a build dir there. You may have to add the content of the src/ dir to ~/libsodium/src/.
$ sudo apt install g++
From ~/libsodium/build/, run the usual:
$ ../configure
$ make -j 8
For a cross build (as an example, from an x86_64 host with a Debian-based distro to an aarch64 target):
$ sudo apt install g++-aarch64-linux-gnu
In ~/libsodium/build/, create the file build.sh (based on this):
#!/bin/sh
export CFLAGS='-Os'
../configure --host=aarch64-linux-gnu
make -j 8
then make it executable via chmod u+x and run it.
II) libzmq
Download zeromq-A.B.C.tar.gz from https://github.com/zeromq/libzmq/releases to e.g. ~/zeromq/, create a build dir inside.
Create a sysroot dir in ~/zeromq/build/ and within it:
place a copy of the ~/libsodium/src/libsodium/include/ dir (or a symlink to it), with the version.h file from ~/libsodium/build/src/libsodium/include/sodium/
create a lib dir and copy into it libsodium.so, libsodium.so.A etc. — you've built those in (I) — from ~/libsodium/build/src/libsodium/.libs/ (or make it a symlink to .../.libs/)
Prepare the genvars.cmake file in ~/zeromq/build/ with the following content:
set(BUILD_STATIC OFF CACHE BOOL "")
set(BUILD_TESTS OFF CACHE BOOL "")
set(WITH_LIBBSD OFF CACHE BOOL "")
set(WITH_LIBSODIUM ON CACHE BOOL "")
set(ENABLE_CURVE ON CACHE BOOL "")
Prepare the toolchain file for CMake, toolchain.cmake, in ~/zeromq/build/ (based on this, that, another one, and the CMake documentation).
toolchain.cmake:
set(CMAKE_FIND_ROOT_PATH ${CMAKE_BINARY_DIR}/sysroot)
(Again, from x86_64 to aarch64.)
set(TOOLCHAIN_PREFIX aarch64-linux-gnu)
set(CMAKE_C_COMPILER ${TOOLCHAIN_PREFIX}-gcc)
set(CMAKE_CXX_COMPILER ${TOOLCHAIN_PREFIX}-g++)
set(CMAKE_FIND_ROOT_PATH /usr/${TOOLCHAIN_PREFIX};${CMAKE_BINARY_DIR}/sysroot)
The rest of toolchain.cmake:
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
set(CMAKE_CXX_FLAGS "-static-libgcc -static-libstdc++")
set(ENV{PKG_CONFIG_LIBDIR} ${CMAKE_BINARY_DIR}/sysroot/lib/pkgconfig)
set(CMAKE_SKIP_BUILD_RPATH true)
Checkpoint: the structure of the build tree should be
~/zeromq/build/
|--sysroot/
| |--include/
| |--lib/
|--genvars.cmake
|--toolchain.cmake
Go to ~/zeromq/build/ and run:
$ cmake -DCMAKE_TOOLCHAIN_FILE=toolchain.cmake -C genvars.cmake ..
$ cmake --build . --config Release -j 8
Remark on cleaning. Reset the build dir to its initial state with a script such as (note bash instead of sh)
#!/bin/bash
shopt -s extglob
rm -rf !("sysroot"|"genvars.cmake"|"toolchain.cmake")
Remark on x86_64-to-i686 cross builds. The toolchain is g++[-9]-multilib; if g++-multilib conflicts with something pertaining to ARM cross-tools, remove it, but keep its "numbered" version, e.g. g++-9-multilib. To the CFLAGS and CMAKE_CXX_FLAGS variables add -m32. Also, you may need to add a /usr/include/asm symlink to /usr/include/asm-generic/.
Remark on building libsodium via Zig. One more option: install Zig and follow these instructions.
Assuming (I) and (II) were successful, you then place the obtained shared libraries — .so files/symlinks of libsodium and libzmq — where the Rust toolchain will be able to "pick them up": into ~/rust/aelhometta/target/TARGET/release/deps/ (here TARGET is, well, a target such as aarch64-unknown-linux-gnu). libsodium.so.M and libzmq.so symlinks suffice.
And finally, from ~/rust/aelhometta
$ cargo build --release [--target i686-unknown-linux-gnu]
or, if Cargo cannot choose proper linker on its own,
$ RUSTFLAGS="-C linker=aarch64-linux-gnu-g++" cargo build --release --target aarch64-unknown-linux-gnu
Since the Ælhometta executable that comes out of this is "tuned" to the built versions of the libraries, they must be shipped with it. As an example, see the Linux-targeted binary releases: there is a deps dir with libsodium.so.M and libzmq.so.N, and the script that runs the executable tells the dynamic linker/loader to look for them in that dir. In the simplest case, when you run the executable from the same dir in which it resides,
$ LD_LIBRARY_PATH=deps:$LD_LIBRARY_PATH ./aelhometta
is enough.
Remark on symlinks. Use them, if you go through many iterations of the building process, to avoid copying an updated library to the build tree of a dependent program again and again. For instance, make ~/zeromq/build/sysroot/lib/ the symlink to ~/libsodium/build/src/libsodium/.libs/.
macOS is a case unsurprisingly similar to the Linux one, except for brew instead of apt, zeromq instead of libzmq, and a few other distinctions.
Begin with the latest binary release for your architecture.
.dylib files in deps/ provide portability: aelhometta_using_deps.sh works whether these libraries are present system-wide or not.
To install dependencies system-wide, install the Homebrew package manager first, following the instructions at https://brew.sh/. Inter alia, Xcode Command Line Tools will be installed. Then
$ brew install zeromq
Or build Ælhometta from source: install the same dependencies as above, then install the Rust toolchain, download the source e.g. to ~/rust/aelhometta/, and from there
$ cargo build --release
(The 1st time it may fail due to the absence of cc, but macOS should then offer to install the toolchain in a popup window.)
You can also build from source the libzmq dependency of Ælhometta and its core dependency that provides security, libsodium. For example, when $ brew install zeromq fails (should we say breaks? spills?), or when you need better device-specific tuning (e.g. the --enable-opt option of libsodium's configure).
In fact, on old versions of macOS, this way is much faster than $ brew install zeromq — we're talking minutes vs. hours here — because the latter installs 20 or so dependencies, most of which are built from source as well.
Here we consider mostly the native build process, from macOS on MacBook to macOS on MacBook, save perhaps a different arch (see the remark at the end). Of course, there are cross alternatives, QuickEmu being one of them (take into account this manual, adding ram="<something>G" to macos-<version>.conf along with decreasing 8 in the if [ "${RAM_VM//G/}" -lt 8 ]; then line of /usr/bin/quickemu, and the --extra_args "-cpu host" argument of the quickemu call).
For the sake of simplicity we assume that libsodium and libzmq are not installed system-wide; if they are, they may interfere with the building process as wrong dependencies.
$ brew install cmake pkg-config
I) libsodium
Download libsodium-X.Y.Z[-stable].tar.gz from https://download.libsodium.org/libsodium/releases/, extract its main content to e.g. ~/libsodium/. You may have to add the content of the src/ dir to ~/libsodium/src/.
From ~/libsodium/, run the usual:
$ ./configure
$ make -j 8
II) libzmq
Download zeromq-A.B.C.tar.gz from https://github.com/zeromq/libzmq/releases to e.g. ~/zeromq/, create a build dir inside.
Create a sysroot dir in ~/zeromq/build/ and within it:
place the include symlink to the ~/libsodium/src/libsodium/include/ dir
place the lib symlink to the ~/libsodium/src/libsodium/.libs/ dir
Prepare the genvars.cmake file in ~/zeromq/build/ with the following content:
set(CMAKE_PREFIX_PATH ${CMAKE_BINARY_DIR}/sysroot CACHE PATH "")
set(BUILD_STATIC OFF CACHE BOOL "")
set(BUILD_TESTS OFF CACHE BOOL "")
set(WITH_LIBSODIUM ON CACHE BOOL "")
set(ENABLE_CURVE ON CACHE BOOL "")
Checkpoint: the structure of the build tree should be
~/zeromq/build/
|--sysroot/
| |--include/
| |--lib/
|--genvars.cmake
Go to ~/zeromq/build/ and run:
$ cmake -C genvars.cmake ..
$ cmake --build . --config Release -j 8
Assuming (I) and (II) were successful, you then place the symlink to the built shared library libzmq.dylib where the Rust toolchain will be able to "pick it up" — into ~/rust/aelhometta/target/release/deps/.
And finally, from ~/rust/aelhometta/,
$ cargo build --release
The Ælhometta executable you obtain this way is "tuned" to the built versions of the libraries, libsodium.M.dylib and libzmq.N.dylib, so they must be present somewhere near, and you load it with DYLD_LIBRARY_PATH pointing to their location. As an example, see the macOS binary release. In particular, when you run the executable from the same dir in which it resides, and the .dylibs are in the deps subdir:
$ DYLD_LIBRARY_PATH=deps:$DYLD_LIBRARY_PATH ./aelhometta
Remark on building from x86_64 to aarch64 or vice versa. Almost the same story, but the script for building libsodium had better take the form of a file, say build.sh, inside ~/libsodium/, with
#!/bin/sh
export CC="cc --target=aarch64-apple-darwin"
export CFLAGS="--target=aarch64-apple-darwin"
../configure --host=aarch64-apple-darwin
make -j 8
(without this duplication of --target=..., the .dylib files are absent or "hollow"), and similarly libzmq is built by
$ cmake -DCMAKE_C_FLAGS="--target=aarch64-apple-darwin" -DCMAKE_CXX_FLAGS="--target=aarch64-apple-darwin" -C genvars.cmake ..
$ cmake --build . --config Release -j 8
Currently, Ælhometta on such systems runs as a native executable in Termux, so install it first and right away update its packages via $ pkg update && pkg upgrade. Please remember that executables should be inside ~ of Termux (= /data/data/com.termux/files/home), not anywhere on internal memory or external SD; $ termux-setup-storage will provide access to the part of external SD under Android/data/com.termux/files, which you can use to bring Ælhometta's binary to, for example, ~/aelhometta/ via $ cp within Termux itself.
Rooting is not required for this.
Dependencies:
$ pkg install libzmq
You can get the latest binary release (which includes dependencies, .so files in deps/) for armv7 or aarch64 accordingly (hint: use $ lscpu in Termux).
Alternatively, build from source; the rest of this section describes how.
$ pkg install rust
The rest of the building process is the same as in the Linux case, except for $ pkg install libzmq instead of $ sudo apt install libzmq3-dev.
To cross-compile instead, use an x86_64 or i686 device with Linux. There,
download the latest Android NDK for Linux and extract it to e.g. ~/android-ndk-r26d/. SDK/Studio would be redundant. Be careful to retain symlinks in the .zip of the NDK during extraction, or else it will not work — "C compiler cannot create executables" errors of configure appear; produced by clang invocation, they kind of mislead... "no input files", "posix_spawn failed: exec format error" etc. This comment suggests using command-line unzip instead of the built-in extractors of file managers with a GUI
get Ælhometta's source
get the .deb file of the libzmq package from https://packages.termux.dev/apt/termux-main/pool/main/ and browse it as an archive in Midnight Commander (or $ ar x package.deb && tar -x -f data.tar.xz). You need libzmq.so from usr/lib/; copy it to .../aelhometta/target/TARGET/release/deps/
finally, from source dir,
$ RUSTFLAGS="-C linker=$HOME/android-ndk-r26d/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android21-clang++" cargo build --release --target aarch64-linux-android
(or similar for armv7[a]-linux-androideabi instead of aarch64-linux-android) to obtain the binary in target/TARGET/release
The cross crate is an alternative to the Android NDK; it requires some dependencies in turn, Docker Engine in particular. Then you do not specify RUSTFLAGS and replace cargo build ... with cross build ...
Note the Termux-specific library path, hardcoded into the binary, in build.rs. There is also the option to link statically.
Building libzmq for Android from source is simplified by the Android-oriented build scripts already included in the libzmq source release, so you need to provide only general parameters. In particular, libsodium is built along the way. For the sake of definiteness, here we assume Debian as the host system and aarch64 as the target system.
get the Android NDK for Linux (see (2) above)
download libsodium-X.Y.Z[-stable].tar.gz from https://download.libsodium.org/libsodium/releases/ and extract its main content to e.g. ~/libsodium/
download zeromq-A.B.C.tar.gz from https://github.com/zeromq/libzmq/releases and extract to e.g. ~/zeromq/
go to ~/zeromq/builds/android/ and look through README.md. Accordingly, create build_aarch64.sh with
#!/bin/sh
# export MIN_SDK_VERSION=21
export NDK_VERSION=android-ndk-r26d
export ANDROID_NDK_ROOT=$HOME/${NDK_VERSION}
export CURVE=libsodium
export LIBSODIUM_ROOT=$HOME/libsodium
./build.sh arm64
then make it executable via chmod u+x ... and run it.
The needed libraries — libzmq.so (both a linking and a runtime dependency), libsodium.so and libc++_shared.so (runtime dependencies) — are then found in /zeromq/builds/android/prefix/arm64/lib/
The latest binary release contains the executable and the .dll dependencies.
Or build from source, in which case first comes the captivating task of producing linking and runtime ZeroMQ libraries from the latest libzmq release by means of some C/C++ toolchain, CMake, and Sodium... if you do not obtain them, prebuilt, from somewhere else.
install Build Tools for Visual Studio including CMake; the Studio itself is not required
prepare a genvars.cmake file with the following content:
set(CMAKE_PREFIX_PATH ${CMAKE_BINARY_DIR}/sysroot CACHE PATH "")
set(BUILD_STATIC OFF CACHE BOOL "")
set(BUILD_TESTS OFF CACHE BOOL "")
set(WITH_LIBSODIUM ON CACHE BOOL "")
set(ENABLE_CURVE ON CACHE BOOL "")
download zeromq-A.B.C.zip from https://github.com/zeromq/libzmq/releases to e.g. C:/src/zeromq-X.Y.Z/, create a build dir in it, and go into that build/; there, place genvars.cmake and create a sysroot dir with include and lib subdirs
download libsodium-X.Y.Z[-stable]-msvc.zip from https://download.libsodium.org/libsodium/releases/; extract the include/ and x64/Release/v.../dynamic contents into the build/sysroot/include/ and build/sysroot/lib/ dirs respectively
run the "Tools-enhanced" Command Prompt, corresponding to your host and target, e.g. x64 Native Tools Command Prompt
(see VS-related shortcuts in Start Menu). Inside such prompt, change dir to C:/src/zeromq-Z.Y.Z/build/
and from there:
> cmake -C genvars.cmake ..
> cmake --build . --config Release -j 8
(Note: "bare" Command Prompt is not enough here.)
Rename (or copy) build/lib/libzmq-....lib to zmq.lib.
$ sudo apt install cmake mingw-w64
Prepare toolchain.cmake with the following content:
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_VERSION 10.0) # 6.1 -> Windows 7, 6.2 -> Windows 8, 6.3 -> Windows 8.1, 10.0 -> Windows 10
set(TOOLCHAIN_PREFIX ARCH-w64-mingw32)
set(CMAKE_C_COMPILER ${TOOLCHAIN_PREFIX}-gcc)
set(CMAKE_CXX_COMPILER ${TOOLCHAIN_PREFIX}-g++)
set(CMAKE_RC_COMPILER ${TOOLCHAIN_PREFIX}-windres)
set(CMAKE_FIND_ROOT_PATH /usr/${TOOLCHAIN_PREFIX};${CMAKE_BINARY_DIR}/sysroot)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
set(CMAKE_CXX_FLAGS "-static-libgcc -static-libstdc++")
set(ENV{PKG_CONFIG_LIBDIR} ${CMAKE_BINARY_DIR}/sysroot/lib/pkgconfig)
where ARCH is either x86_64 or i686.
Prepare the genvars.cmake file with:
set(BUILD_STATIC OFF CACHE BOOL "")
set(BUILD_TESTS OFF CACHE BOOL "")
set(ZMQ_CV_IMPL none CACHE STRING "")
set(WITH_LIBBSD OFF CACHE BOOL "")
set(WITH_LIBSODIUM ON CACHE BOOL "")
set(ENABLE_CURVE ON CACHE BOOL "")
download zeromq-A.B.C.tar.gz from https://github.com/zeromq/libzmq/releases to e.g. ~/zeromq-X.Y.Z/, create a build dir in it, and go into that build/; there, place toolchain.cmake, genvars.cmake, and create a sysroot dir
download libsodium-X.Y.Z[-stable]-mingw.tar.gz from https://download.libsodium.org/libsodium/releases/ and extract the libsodium-win... content (bin/, include/, lib/) into build/sysroot/
at last, from build/:
$ cmake -DCMAKE_TOOLCHAIN_FILE=toolchain.cmake -C genvars.cmake ..
$ cmake --build . --config Release -j 8
Now zmq.lib (native) / libzmq.dll.a (cross) — the linking library — should appear in build/lib/; and libzmq-....dll (native) / libzmq.dll (cross) — the runtime library — in build/bin/.
Then — assuming that the Rust toolchain is installed and Ælhometta's source is downloaded to .../aelhometta/ — copy the linking library to .../aelhometta/target/TARGET/release/deps/ (where TARGET is either x86_64-pc-windows-msvc (native) / x86_64-pc-windows-gnu (cross) or i686-pc-windows-...).
If the compilation is a cross one, add required targets:
$ rustup target add x86_64-pc-windows-gnu i686-pc-windows-gnu
Produce the binary:
$ cargo build --release [--target ARCH-pc-windows-gnu]
from Ælhometta's source dir.
Lastly, before running the binary, you need to place the following .dll-s in the dir with it:
libzmq.dll — from the output of the libzmq building process described above
libsodium-MN.dll — from the Sodium release, which you also have got during a building process
Additionally, in case of cross build:
libwinpthread-1.dll and libgcc_s_seh-1.dll (64-bit) / libgcc_s_[dw2|sjlj]-1.dll (32-bit) — from wherever, e.g. from /usr/x86_64-w64-mingw32/lib/ and /usr/lib/gcc/x86_64-w64-mingw32/9.3-posix, or the WinLibs toolchain for MinGW-w64.
If there are no prebuilt binaries and emulation via e.g. QEMU is not an option, what remains is to build from source. Install the Rust toolchain, download Ælhometta's source to .../aelhometta/, and from there
$ cargo build --release
should proceed almost to the end... when the linking stage fails with a -lzmq not found-like error. Well, now you need libzmq, both linking- and runtime-dependencies. Search your OS repositories, the Internet, try to build it, in turn, from source (see the Linux case).
This is not impossible, because all cases above were here before. The more platforms are covered, the more hookable ones for whom they are native will be hooked and make the first tiny step that follows:
By now, after Deployment, you have Ælhometta's executable and dependencies on the target system. On Linux and macOS (and Android) you can also run the aelhometta_using_deps.sh script from the binary release to make the executable rely on shared libraries in deps/ instead of system ones — the result should be the same. For the sake of definiteness, we assume that you run the executable itself.
Since the state is saved in a file whose size can reach hundreds of MB, consider making a symbolic link to the binary, or placing that binary, at another location, perhaps a ramdisk, and running it from there:
user@pc:/mnt/ramdisk/aelhom$ ./aelhometta
or calling it from there when symlinks are impossible (e.g. on Windows or Android):
user@pc:/mnt/bigdisk/aelhom$ ~/rust/aelhometta/target/release/aelhometta
Anyway, then you see the shell, not of your OS (prompt $), but of Ælhometta itself (prompt Æ):
Loading Ælhometta... Cannot load Ælhometta: Cannot open 'aelhometta.bin': No such file or directory (os error 2)
Using new default one
Loading Commander... Cannot load Commander: Cannot read from 'commander.json': No such file or directory (os error 2)
Using new default one
Lastick 1969.12.31 Wed 23:59:59.000 UTC - Age 0 : Nodes 0 | Controllers 0 | Limit =2^22 : Memory ~ 72 MiB
"?" — see the list of available commands
Æ █
We are now at what may be called Abiohazard Level 0... since nothing has happened yet. Which is dull, so let us go further.
Abiohazard Level 1, where ælhometta is contained in the memory of your computer, does not reach other computers and does not interact with the outside world.
Type in the following commands:
Æ anc b 5
Æ r
(which is the same as
Æ ancestor b 5
Æ run
but with aliases). That is, we introduce an ancestor — of type B, with spacity 5 — and let the environment tick on its own. At each tick, a controller is chosen randomly from the set of all controllers, processes the content of the node it currently "looks" at, and moves to the next node.
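For illustration, here is a minimal sketch of that ticking, with simplified, hypothetical types (the authoritative logic lives in src/aelhometta/tick.rs and is far richer): one randomly chosen controller per tick executes the node its execution pointer "looks" at, then advances, or ceases to exist if the pointer dangles.

use std::collections::HashMap;

type Uid = u32;
type Optuid = Option<Uid>;

struct Node { content: u8, next: Optuid, altnext: Optuid }
struct Controller { exec: Optuid, success: bool }

struct World {
    nodes: HashMap<Uid, Node>,
    controllers: HashMap<Uid, Controller>,
    age: u128,
}

impl World {
    /// One tick of the controller `cuid`, assumed to have been chosen at random by the caller.
    fn tick_one(&mut self, cuid: Uid) {
        self.age += 1;
        let exec = match self.controllers.get(&cuid) {
            Some(ctrl) => ctrl.exec,
            None => return,
        };
        // Look up the node the execution pointer "looks" at
        let target = exec.and_then(|nuid| self.nodes.get(&nuid).map(|n| (n.content, n.next, n.altnext)));
        match target {
            Some((_content, next, altnext)) => {
                let ctrl = self.controllers.get_mut(&cuid).unwrap();
                // ... execute `_content` here, possibly flipping ctrl.success ...
                // then advance: for a branching node, main next on success, alternative next otherwise
                ctrl.exec = if ctrl.success { next } else { altnext };
            }
            // Empty or dangling pointer: the controller ceases to exist
            None => { self.controllers.remove(&cuid); }
        }
    }
}

fn main() {
    let mut world = World { nodes: HashMap::new(), controllers: HashMap::new(), age: 0 };
    world.tick_one(0); // no such controller yet, so only the age increments
    assert_eq!(world.age, 1);
}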
The charts on the right visualise relative frequencies of executed commands and other instructions. In the beginning, NextOptuid should be the most frequent one.
Wait for half a minute or so, then press (almost) any key to stop ticking. There should be many more nodes and controllers, like this:
[0:00:29] Age 1934061 : Nodes 3645220 | Controllers 14159
[0:00:30] Age 1965719 : Nodes 3702543 | Controllers 14346
Lastick 2024.01.16 Tue 12:00:30.292 UTC - Age 1974509 : Nodes 3720924 | Controllers 14407 | Limit 4194304=2^22 : Memory ~ 192 MiB
Æ █
Observe more statistics:
Æ stat cgen
Æ stat ctick
Æ stat chan
Æ stat cont
Æ stat run
By now, content statistics (Æ stat cont) mostly contain 0s, because only a fraction of all available commands and instructions are present in ancestor B and its exact replicas-descendants. Let's introduce random mutations, or glitches:
Æ glitch back 0.001
Æ glitch repl 0.03
Æ glitch cons 2.5e-2
Then again
Æ r
After a while, stop and compare new content statistics to the old one.
To see how the structure of chains has changed in detail, pick a random controller's uid — CTRL-UID (8 hexadecimal digits) — by
Æ rand ctrl
and see that controller's state:
Æ sct CTRL-UID
The start node of the chain to which that controller is attached is given under ChainStart; we denote it by NODE-UID. Follow the chain from the start:
Æ ss NODE-UID
Disappointment: the sequence will be almost the same as that of the original entity, described in Ancestors. Has evolution stuck?
Abiohazard Level 2, where ælhometta interacts with other computers (or rather ælhomettas on them), but not (directly) with the outside world.
Assuming 2 ælhomettas running on 2 devices are at Abiohazard Level 1 already, you additionally need, from your side and from the other device's, or its owner's, side, the keys, onion addresses, and port numbers used in the commands below. Here's how to generate a keypair quickly:
$ pip3 install -U pyzmq
import zmq
public, secret = zmq.curve_keypair()
print("Public key:", public.decode("ascii"))
print("Secret key:", secret.decode("ascii"))
$ pkg install libzmq
$ curve_keygen
If on Linux, set up Your Onion Service, skipping Step 1, because you do not need a web server. Verify that the Tor service runs via $ systemctl status tor@default and obtain the onion address from the hostname file in the dir specified with HiddenServiceDir.
Install Tor:
$ brew install tor
Go to /usr/local/etc/tor/ and make a copy of torrc.sample named torrc. Open torrc in any text editor, uncomment and change the two HiddenService-related lines to something like:
HiddenServiceDir /usr/local/var/lib/tor/hidd_aelhom_srv1/
HiddenServicePort 60847
Now add the lib/tor directory subtree to /usr/local/var/ if it is not already there. And then
$ brew services start tor
Soon .../hidd_aelhom_srv1/ is created by the Tor service (verify that it runs via $ brew services info tor). The hostname file within contains the onion address. Backup the other files too, privately.
$ pkg install tor
$ pkg install termux-services
Open /data/data/com.termux/files/usr/etc/tor/torrc and edit the HiddenService-related lines:
HiddenServiceDir /data/data/com.termux/files/usr/var/lib/tor/hidd_aelhom_srv1/
HiddenServicePort 60847
Start Tor service:
$ sv-enable tor
$ sv up tor
(there is no systemctl or service). Wait for the .../hidd_aelhom_srv1/ dir to be created by the service (verify that it runs via $ sv status tor) and obtain the onion address from the hostname file. Keep copies of the other files as well, privately.
The Tor Expert Bundle for Windows should suffice. Unpack its content, run Command Prompt as administrator, and, from .../bundle/tor/,
> tor --service install
This, inter alia, introduces the tor subdir of the C:/Windows/ServiceProfiles/LocalService/AppData/Roaming/ dir. Create a torrc file there with content such as
HiddenServiceDir C:/Windows/ServiceProfiles/LocalService/AppData/Roaming/tor/hidden_service1
HiddenServicePort 60847
From that administrator's command prompt,
> tor --service stop
> tor --service start
Soon the HiddenServiceDir-specified dir appears, with some files inside. In particular, hostname contains the onion address. (Other files are important too, keep their copies privately.)
As usual, a secret key must be known only to its owner.
The default port is 60847 (0xEDAF).
When these strings and numbers have been established, on your device do
Æ peer secret MySecretKeyMySecretKeyMySecretKeyMySecre
Æ peer port 60847
Æ peer share size 10000
Æ peer share interval 2000000
Æ peer expose
Other participant, on their device, does
Æ peer secret TheirSecretKeyTheirSecretKeyTheirSecretK
Æ peer port 60847
Æ peer share size 20000
Æ peer share interval 1000000
Æ peer expose
Now both of you have to subscribe to each other's publication. You do
Æ peer connect TheirPublicKeyTheirPublicKeyTheirPublicK TheirOnionAddressTheirOnionAddressTheirOnionAddressTheir 60847
And they do
Æ peer connect MyPublicKeyMyPublicKeyMyPublicKeyMyPubli MyOnionAddressMyOnionAddressMyOnionAddressMyOnionAddress 60847
Do not forget to run both instances:
Æ r
In a few seconds, if connection is established, your ælhometta will have access to arrays of 64-bit integers transmitted by their ælhometta. Check it on your side with
Æ peer
— there should be 1 other peer, its Share size should be non-zero, and its Last update should be some recent instant (typically a few seconds in the past). To check the data received,
Æ peer ether TheirPublicKeyTheirPublicKeyTheirPublicK 0 100
Whether your and their ælhomettas actually use the integer values they exchange is a different question (hint: probably they do not, in the beginning).
Remark 1. It is possible for the "other" device to be... your device again, i.e. both ælhomettas run on the same computer. Only assign different HiddenServicePort-s in /etc/tor/torrc to the corresponding hidden services.
Remark 2. Some peers are already out there, although "24/7 online" is not guaranteed... at all. Try to connect to them:
Æ peer connect USBD7[O^8L[}saCh+6U#}6wie4oAedZ4#W4!b%LI t3kauc3flp2bpv3mv7vtnedfu6zrho3undncccj35akpuyjqqydwvfyd 60847
Æ peer connect &i!VkHl[]m.c!^j0D%i)&4#[u5b(a=QCdZ9C0$p{ yhel64h6cjab75tcpncnla2rdhqmxut2vtitywhbu7bpjh4hfhp6hnid 60847
These two are just demo ones, maintained by us for the sake of network functionality testing. Someday, more may be listed in Networking.
Abiohazard Level 3, where ælhometta interacts with other ælhomettas and with the outside world.
So, your ælhometta runs at Abiohazard Level 2. We are going to add audio interaction with its surroundings, assuming that the device it runs on has at least a speaker and a microphone.
We also need 2 intermediate applications, in essence a player and a recorder. In view of very low requirements to their functionality, our "player" will be called buzzer and our "recorder" will be called hearer.
The buzzer plays several tones at fixed frequencies; their volumes are periodically read from buzz.i64 and adjusted. For the sake of simplicity, only the lowest byte of an int64 value is used; the others are assumed to be 0. That is, if the byte content read from the file is
2A 00 00 00 00 00 00 00 98 00 00 00 00 00 00 00 ...
then the 1st tone will be played with volume 2Ah = 42 (out of 256), the 2nd tone — with volume 98h = 152, and so on. Next time, the content read is
01 00 00 00 00 00 00 00 E7 00 00 00 00 00 00 00 ...
and volumes change: 1st tone becomes almost muted (volume 1/256), 2nd tone becomes louder (volume 231/256).
The hearer periodically records sound from the microphone, estimates levels in several frequency bands, and writes them to hear.i64, overwriting the previous levels. (Note: for now, it is irrelevant whether these bands have any relation to the frequencies of the buzzer.) As before, we assume 1-byte resolution.
For example, someone plays a trombone near the microphone. In the recorded sound, low frequencies have larger amplitudes, while the amplitudes of high frequencies are small. Therefore, the hearer writes to hear.i64 data such as
DE 00 00 00 00 00 00 00 ... 20 00 00 00 00 00 00 00
Then they put trombone away and begin to whistle. Now the spectrum is mirrored: low frequencies carry less energy, high ones carry more,
14 00 00 00 00 00 00 00 ... B3 00 00 00 00 00 00 00
Look for a quick-and-dirty Python implementation of such applications in Input/Output, or write them yourself, or use the functionality of more sophisticated digital audio editors.
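As for the file convention itself, here is a minimal Rust sketch of it (the bundled helpers are in Python; the helper names, paths and sizes here are purely illustrative): each value occupies 8 bytes, little endian, and only the lowest byte matters.

use std::fs;
use std::io;

// Write levels as 64-bit little-endian integers, one per value (2A 00 00 00 00 00 00 00 ...)
fn write_levels(path: &str, levels: &[u8]) -> io::Result<()> {
    let mut bytes = Vec::with_capacity(levels.len() * 8);
    for &v in levels {
        bytes.extend_from_slice(&(v as i64).to_le_bytes());
    }
    fs::write(path, bytes)
}

// Read them back, keeping only the lowest byte of each int64
fn read_levels(path: &str) -> io::Result<Vec<u8>> {
    let bytes = fs::read(path)?;
    Ok(bytes
        .chunks_exact(8)
        .map(|c| i64::from_le_bytes(c.try_into().unwrap()) as u8)
        .collect())
}

fn main() -> io::Result<()> {
    write_levels("hear.i64", &[0xDE, 0x20])?; // what a hearer might write
    let volumes = read_levels("hear.i64")?;   // what a buzzer-like player would read
    assert_eq!(volumes, vec![0xDE, 0x20]);
    Ok(())
}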
Let there be 12 frequencies of the buzzer and 14 bands of the hearer. Put the buzzer and the hearer into the dir with Ælhometta's executable and start them (the buzzer is silent, because buzz.i64 does not exist yet).
Add the mapping from the file hear.i64 to the range of 14 integer channels of ælhometta, beginning with the 50th one:
Æ iomap in add 50 14 1500000 ./hear.i64
Analogously, add the mapping from the range of 12 integer channels, beginning with the 70th one, to the file:
Æ iomap out add 70 12 1000000 ./buzz.i64
And then run,
Æ r
If integer channels 70–81 contain large enough numbers, you should hear some... buzz. When these integers change, the buzz changes as well, with some delay. On the other hand,
Æ eth int 50 14
should result in something similar to
50 184=B8h
51 255=FFh
52 103=67h
53 67=43h
54 31=1Fh
55 48=30h
56 41=29h
57 21=15h
58 9=9h
59 9=9h
60 10=Ah
61 6=6h
62 2=2h
63 0=0h
An obvious "dirty trick" to skip waiting for "evolution" to fill buzzer-related integer channels with non-zero values is to "short-circuit" them with hearer-related channels: remove the "out" mapping above via
Æ iomap out del 0
and replace it with
Æ iomap out add 50 12 1000000 ./buzz.i64
Behold! In case you have included the hearer's channels in what your ælhometta shares as a peer, anyone who subscribes to it will receive a (very coarse) spectrum of the soundscape around your device, turning it into a bug.
Finally,
Æ q
saves ælhometta's and commander's states to the current dir, and exits the shell (qq exits without saving anything).
When you run the program next time, everything is restored:
Loading Ælhometta... OK
Loading Commander... OK
Lastick 2024.01.16 Tue 23:57:21.584 UTC - Age 1974509 : Nodes 3720924 | Controllers 14407 | Limit 4194304=2^22 : Memory ~ 192 MiB
"?" — see the list of available commands
Æ █
...Rest assured that it has been indeed only a quickstart.
The commander keeps a few settings related to the format of data displayed while ælhometta runs (Æ sets shows them, Æ set ... sets them) and the shell history (Æ history ... displays recent or entire history). The shell itself is a basic command-line interface.
Put differently, the commander and shell are the front end to ælhometta's back end. While the shell awaits your input,
Æ █
— ælhometta is paused, there are no ticks.
Settings of the commander have no effect on how ælhometta behaves, except speed: when show_ticks is true, screen output of every tick slows it down significantly (2 times or even more).
Æ help, or Æ ? for short, provides information about all commands or about a given one. For instance,
Æ ? peer
displays descriptions and parameters of all subcommands concerning network configuration.
The state of the commander is saved to commander.json.
Most of them have shorter aliases.
quit or exit or end or bye (and save state)
quitquit or ... byebye (do not save state)
help
= (repeat last command)
ancestor (introduce one, with parameters)
run (until keypress, show updated counters every second)
tick (one step of a controller)
glitch (probabilities and counters of mutations)
shownode (single node)
showctrl (state of controller)
showseq (forward sequence of nodes)
prevnodes (nodes that have given next one)
backtrace (backward sequence of nodes)
ether (2 global arrays of optuids and integers)
random (uid of random entity)
statistics
cleanse (part or all of state)
commandswitch (use to NOP commands)
changelim (adjust maximum number of entities)
peer (networking)
iomap (I/O)
showsizes (predefined sizes of some arrays)
settings (view commander settings)
set (change commander settings)
history (of commands)
about (regarding the program in general)
You have probably noticed it, because it occupies the right part of the screen and slightly changes each second... It, too, displays statistics — of executed commands, construction instructions, and the ratio of main/alternative branch choices — but over a short interval of the immediate past instead of over the entire past, which Æ stat run does. The example in the synopsis shows that the most frequent Command during the last 16 seconds was NextOptuid.
To choose a command (or construction instruction) and see its full name and exact count of occurrences over the time window, press ← → (or ↑ ↓) while running. Any other key will stop the running and return you to the shell prompt Æ.
Adjust the appearance of this window via the show_freqs, freqs_window_margin, freqs_comm_str_len, and freqs_cons_str_len settings of the commander. freqs_interval specifies the duration, in seconds, of the interval; the default is 16.
As glitches introduce other commands and they percolate into chains of nodes, their frequencies increase. Moreover, some evolutionary shifts are expected to change the distribution so that one or another group of commands dominates, hinting at what "species" is more successful. The opposite implication is not guaranteed: e.g. the distributions of chemical elements in mice and men are not very different, nor are the distributions of these elements inside the geosphere (inclusively interpreted) 10 million years ago and today.
In the shell mode, exit status of the application is either 0 (success) or 2 (critical error). As for 1,
Most of the time, your ælhometta will run without any interference from you, hours after hours, maybe months after months, until you interrupt its silent course by pressing (almost) any key.
For the sake of resiliency, however, we recommend backing up the state regularly.
These approaches combine when you run the application with a single argument instead of no arguments, that argument being the requested duration of running in seconds:
$ ./aelhometta 43200
There is no shell in this mode. As soon as the duration ends (12 hours in this example) or (almost) any key is pressed, the application exits and saves the state. In the latter case, exit status is set to 1.
To run Ælhometta indefinitely with backup once per hour, place such call into a loop:
#!/bin/sh
while true; do
./aelhometta 3600
if [ $? = 1 ]; then
echo "Halt due to keypress"
break
fi
done
Be aware that this loop will continue in case of the application's critical error (exit status 2).
To run Ælhometta for one day each week, place the /path/aelhometta 86400 call into /etc/cron.weekly/.
Whatever the scenario of this kind is, it may help to imagine your character in the scenario being — absent, far away, gone, you name it, except for brief appearance in the beginning. Which is how the things are going to be anyway...
Besides SSH access to the remote computer where Ælhometta has been installed, you need a "persistent detached terminal", provided by a terminal multiplexer like Byobu, tmux, or GNU Screen; install it there as well.
Simply run Ælhometta in virtual terminal (preferably with regular backup as described above) and detach from that terminal; later, attach to it again.
X11 connection or forwarding is not needed.
Only now we proceed behind the curtain of superficially observable behaviour...
...Perhaps the better way to acquaint yourself with it is to simply read through src/aelhometta.rs. The namesake structure, verbatim from there:
pub struct Ælhometta {
// Serialisable part
max_num_chains_binlog: u8,
new_node_uid: Uid,
nodes: HashMap<Uid, Node>,
nodes_historing: Vec<Optuid>,
i_nodes_historing: usize,
new_controller_uid: Uid,
controllers: HashMap<Uid, Controller>,
controllers_historing: Vec<Optuid>,
i_controllers_historing: usize,
commandswitch: u128,
ether_optuids: Vec<Optuid>,
ether_integers: Vec<Integer>,
age: u128,
ut_last_tick: i64,
spaces_count: u128,
branches_main_count: u128,
branches_alt_count: u128,
commands_count: HashMap<Command, u128>,
constructions_count: HashMap<Construction, u128>,
glitch_background_prob: f64,
glitch_background_count: u128,
glitch_replicate_prob: f64,
glitch_replicate_count: u128,
glitch_construct_prob: f64,
glitch_construct_count: u128,
// Peer-related
share_size: usize,
share_interval: i64,
ut_last_share: i64,
shares_count: u128,
secretkey: String,
port: u16,
torproxy_port: u16,
torproxy_host: String,
exposed: bool,
other_peers: Vec<OtherPeer>,
whitelist: HashSet<String>,
in_permitted_before_num: u64,
in_attempted_before_num: u64,
ut_last_disconnect: i64,
ut_last_permit: i64,
ut_last_attempt: i64,
// IO-related
output_mappings: Vec<IntegersFileMapping>,
input_mappings: Vec<IntegersFileMapping>,
// Non-serialisable part
max_num_chains: usize,
max_num_chains_binmask: usize,
rng: ThreadRng,
efunguz: Option<Efunguz>,
}
where
pub type Uid = u32;
pub type Optuid = Option<Uid>;
pub type Integer = i64;
Single-linked units of information. See A node.
Automata that move along chains of nodes and act according to the content of these nodes. See A controller.
Global arrays of opaque node addresses and integer values, accessible to all controllers.
Specifies network identity of ælhometta and its interaction with other ælhomettas across the network (Tor, to be more precise). See Networking.
Maps continuous ranges of the integers ether from (input) and to (output) files controlled by other applications. Those applications, in turn, connect the files with sensors and actuators in the outside world. See Input/Output.
The i-th bit of commandswitch, when 0, NOPs the Command with index i, i.e. replaces its execution by... nothing (considered successful). Since the GetExecFromOptuid and SetOptuidFromExec commands deviate from the descriptive/executive separation principle, they are NOPped by default (and you can change that).
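A small sketch of that semantics (a hypothetical helper, not from the source):

fn is_enabled(commandswitch: u128, command_index: u8) -> bool {
    // bit i set to 1 = execute the Command with index i; 0 = NOP it
    (commandswitch >> command_index) & 1 == 1
}

fn main() {
    let commandswitch: u128 = !0 ^ (1 << 20); // all commands enabled except index 20
    assert!(is_enabled(commandswitch, 0));
    assert!(!is_enabled(commandswitch, 20));
}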
age increments each tick.
ut_last_tick, as well as the other ut_...s, is measured in microseconds since the Unix epoch.
spaces_count, branches_..._count and commands_count keep track of how many times each respective content has been executed. constructions_count counts Construction instructions that have occurred while executing Construct commands.
The state of ælhometta is saved to aelhometta.bin, see src/aelhometta/serbin.rs. This serialisation, though binary and with tricks such as LEB128, is not minimal in size; classical ZIP, for instance, nearly halves it.
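For illustration only, a minimal unsigned-LEB128 encoder/decoder of the kind of "trick" mentioned above (the actual serialisation code in src/aelhometta/serbin.rs is not reproduced here):

fn encode_uleb128(mut value: u128, out: &mut Vec<u8>) {
    loop {
        let byte = (value & 0x7f) as u8;
        value >>= 7;
        if value == 0 {
            out.push(byte);    // last group of 7 bits: high bit clear
            break;
        }
        out.push(byte | 0x80); // more groups follow: high bit set
    }
}

fn decode_uleb128(bytes: &[u8]) -> (u128, usize) {
    let (mut value, mut shift, mut used) = (0u128, 0u32, 0usize);
    for &b in bytes {
        value |= ((b & 0x7f) as u128) << shift;
        shift += 7;
        used += 1;
        if b & 0x80 == 0 { break; }
    }
    (value, used)
}

fn main() {
    let mut buf = Vec::new();
    encode_uleb128(4194304, &mut buf); // 2^22, the default entity limit
    assert_eq!(buf, vec![0x80, 0x80, 0x80, 0x02]);
    assert_eq!(decode_uleb128(&buf), (4194304, 4));
}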
pub struct Node {
b_content: u8,
b_next: Uid,
b_altnext: Uid
}
A node has content, which describes what the controller should do, and 2 pointers: the (main) next node and the alternative next node, which describe to what node the controller moves after this one. The value of such a pointer (it may be empty) is also called optuid (optional unique identifier).
The content is represented as a standard byte, see impl ToBits<u8> for Content and impl OtBits<u8> for Content in src/aelhometta.rs.
There are 4 types of content:
NOP, placeholder, does nothing. But it can be replaced with something.
This type of node is the only one providing non-linearity of the execution path, if the GetExecFromOptuid and SetOptuidFromExec commands are NOPped (which is the default).
If the success flag is true, the execution pointer moves to the main next node. If the success flag is false, the execution pointer moves to the alternative next node.
Specifies an action that controller should perform. May change controller's state — registers, flags, arrays of pointers, integers, and so on (see A controller); may also change the state of the entire ælhometta.
If a command fails (e.g. division by 0, overflow, index out of bounds), the success flag will be set to false. However, some "junk" may be present where the result should have been, and what is "failure" and what is not depends. Test... commands affect the success flag too.
"Uncrashability" principle (single chemical reaction cannot crash the universe) permeates what commands do and is familiar to everyone in the trade.
Again, src/aelhometta/tick.rs provides a more complete picture. The following list heavily relies on the self-explanatory property of... words.
Nullary operators with integer result placed in the integer register:
RandomContent ("valid" integer representing one of the Content variants)
RandomInteger (all 64 bits are random)
ZeroInteger
Unary operators on integer register replacing its value with the result:
Abs
BitNot
Decrement
Increment
Negate
ShiftDown
ShiftUp
Sign
Square
Binary operators on integer register as the 1st operand and selected integer as the 2nd one, the result goes to integer register:
Add
BitAnd
BitOr
BitXor
Divide
Multiply
Remainder
Subtract
Convert integer register to index of selected element in certain array...
IntegerToDataOptuidIndex
IntegerToIntegerChannel
IntegerToIntegerIndex
IntegerToOptuidChannel
IntegerToOptuidIndex
IntegerToPeer
...and back:
DataOptuidIndexToInteger
IntegerChannelToInteger
IntegerIndexToInteger
OptuidChannelToInteger
OptuidIndexToInteger
PeerToInteger
Convert integer register to success flag and vice versa (usual int ↔ bool semantics):
IntegerToSuccess
SuccessToInteger
Operations with the node pointed to by data_optuids[i_data_optuid]:
Insert
Read
Remove
Skip
Write
Tests, affecting the success flag:
TestDataOptuid (true if data_optuids[i_data_optuid] points to an existing node)
TestIntegerNegative
TestIntegerNonZero
TestIntegerPositive
Creation of a new chain and, if it is active, of a controller attached to it:
Construct
NewChainAddInteger
NewChainAddIntegerChannel
NewChainAddOptuid
NewChainAddOptuidChannel
NewChainDetach
NewChainInitActive
NewChainInitPassive
Replicate
Construct and Replicate work with data_optuids[i_data_optuid] of a controller, consecutively reading the node where it points and advancing it to the next node. In a sense, they are "shortcuts", since they cut some corners of Ælhometta's artificial biochemistry.
NewChainAdd... commands actually add the respective element to the controller attached to the active chain being created.
A new chain is not empty; it has a Space node at the beginning.
Move to next/previous element of corresponding array:
NextDataOptuid
NextInteger
NextIntegerChannel
NextOptuid
NextOptuidChannel
NextPeer
PreviousDataOptuid
PreviousInteger
PreviousIntegerChannel
PreviousOptuid
PreviousOptuidChannel
PreviousPeer
Read/write "optuids ether" — global array of (some) optuids accessible to all controllers of ælhometta, with destination/source respectively being currently selected optuid of a controller, and the index of ether's element being provided by optuid_channels[i_optuid_channel]
:
ReceiveOptuid
TransmitOptuid
Read/write "integers ether". Destination/source is integer register, ether's element is given by integer_channels[i_integer_channel]
, and the ether is that of other peer when i_peer
is not 0. In the latter case, the ether is read-only (only Receive...
works) since it has been obtained from other peer via publish-subcribe pattern:
ReceiveInteger
TransmitInteger
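A hypothetical, simplified sketch of ReceiveInteger along the lines just described (the authoritative logic is in src/aelhometta/tick.rs; field names mirror the structs shown elsewhere in this README, but the code is illustrative only):

type Integer = i64;

struct Ctrl {
    integer: Integer,              // integer register (destination)
    integer_channels: Vec<usize>,
    i_integer_channel: usize,
    i_peer: usize,                 // 0 = this peer, 1.. = other peers
    success: bool,
}

fn receive_integer(ctrl: &mut Ctrl, own_ether: &[Integer], other_ethers: &[Vec<Integer>]) {
    let ch = ctrl.integer_channels[ctrl.i_integer_channel];
    // Select whose ether is read: this ælhometta's own one, or a read-only copy from another peer
    let src = if ctrl.i_peer == 0 {
        own_ether.get(ch)
    } else {
        other_ethers.get(ctrl.i_peer - 1).and_then(|e| e.get(ch))
    };
    match src {
        Some(&v) => { ctrl.integer = v; ctrl.success = true; }
        None => ctrl.success = false, // out of bounds: the command just fails
    }
}

fn main() {
    let mut ctrl = Ctrl { integer: 0, integer_channels: vec![3], i_integer_channel: 0, i_peer: 0, success: false };
    let own = vec![0, 0, 0, 42];
    receive_integer(&mut ctrl, &own, &[]);
    assert_eq!((ctrl.integer, ctrl.success), (42, true));
}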
Exchange data between integer register and integers array, and between advancing and non-advancing optuids arrays:
GetIntegerFromIntegers
SetDataOptuidFromOptuid
SetIntegersFromInteger
SetOptuidFromDataOptuid
Restart controller:
Restart
Copy selected optuid to exec optuid (this command is unique in "forcing" the optuid of next execution node regardless of current node's main and alternative pointers) and vice versa. These two are NOPped by default:
GetExecFromOptuid
SetOptuidFromExec
Note that it is impossible to convert Optuid to Integer and vice versa, in accordance with "opaque addressing" principle.
Also, note the redundancy. For example, NextIntegerChannel does almost what IntegerChannelToInteger, Increment, IntegerToIntegerChannel do.
Construction instructions come into play only when a controller constructs an executive (active) chain — when the Construct command is executed, which works with a stack of nodes' uids (a rough sketch appears below).
AltNext turns on "alternative next" mode until the NextToStored instruction, which will then set the alternative next pointer of the currently added node instead of the main one and revert to "main mode"
Discard removes the topmost uid from the stack
NextToStored sets the main or alternative next node pointer of the currently added node to the topmost uid of the stack
Restore changes the "construction pointer" from the currently added node to the topmost uid of the stack
Store pushes the uid of the currently added node to the stack
Swap swaps the 2 topmost uids on the stack
Terminus interrupts the Construct (but not Replicate) command at the currently added node
Still we lack any quantification or algebraisation of the intricate ways in which the choice of encoding by natural numbers affects the evolutionary perspectives of such a system... For example, variants of enum Command are numbered in alphabetical order: ReceiveOptuid is 52, Remainder is 53; similarly, AltNext is 0 and Terminus is 6; why not contrariwise? This is so arbitrary, so torn away from underlying levels, as if chemistry were decoupled from physics, that evolution may be too weak to fill the gap... with what? Even that is innominabilis today.
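To make the verbal description of the construction instructions concrete, here is a rough, hypothetical sketch of that stack discipline; details such as whether the stack is popped or merely peeked are guesses, and the real interpreter is the Construct branch of src/aelhometta/tick.rs.

#[derive(Clone, Copy)]
enum ConstructionInstr { AltNext, Discard, NextToStored, Restore, Store, Swap, Terminus }

type Uid = u32;

struct ConstructState {
    stack: Vec<Uid>,
    current: Uid,   // uid of the currently added node (the "construction pointer")
    alt_mode: bool, // set by AltNext, cleared by NextToStored
    finished: bool, // set by Terminus
}

// `link(from, to, alt)` is a hypothetical callback that sets `from`'s main
// (alt == false) or alternative (alt == true) next pointer to `to`.
fn interpret(st: &mut ConstructState, instr: ConstructionInstr, mut link: impl FnMut(Uid, Uid, bool)) {
    use ConstructionInstr::*;
    match instr {
        AltNext => st.alt_mode = true,
        Store => st.stack.push(st.current),
        Discard => { st.stack.pop(); }
        Swap => {
            let n = st.stack.len();
            if n >= 2 { st.stack.swap(n - 1, n - 2); }
        }
        NextToStored => {
            if let Some(&top) = st.stack.last() {
                link(st.current, top, st.alt_mode);
            }
            st.alt_mode = false;
        }
        Restore => {
            if let Some(&top) = st.stack.last() {
                st.current = top;
            }
        }
        Terminus => st.finished = true,
    }
}

fn main() {
    let mut st = ConstructState { stack: Vec::new(), current: 7, alt_mode: false, finished: false };
    interpret(&mut st, ConstructionInstr::Store, |_, _, _| {});
    interpret(&mut st, ConstructionInstr::Terminus, |_, _, _| {});
    assert_eq!(st.stack, vec![7]);
    assert!(st.finished);
}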
Remark. There was an older version of Ælhometta without automatic conversion between Content and Integer, with 2 respective registers instead of the 1 Integer now. That approach implied too much opacity of the numerical level as seen from the instruction level, and was abandoned. We leave its resurgence as "an exercise to the reader" (see also Panmutations).
Executive entity. CPU is another analogy.
Again, verbatim from the source:
pub struct Controller {
chain_start_optuid: Optuid,
exec_optuid: Optuid,
data_optuids: Vec<Optuid>,
i_data_optuid: usize,
new_chain_optuid: Optuid,
new_controller: Option<Box<Self>>,
registers: Registers,
flags: Flags,
optuids: Vec<Optuid>,
i_optuid: usize,
integers: Vec<Integer>,
i_integer: usize,
optuid_channels: Vec<usize>,
i_optuid_channel: usize,
i_peer: usize,
integer_channels: Vec<usize>,
i_integer_channel: usize,
generation: u128,
ticks: u128
}
For now, the plural in Registers and Flags is redundant, because
pub struct Registers {
integer: Integer
}
pub struct Flags {
success: bool
}
exec_optuid is basically the instruction pointer; chain_start_optuid keeps its initial value for the sake of the Restart command. A controller "dies" as soon as exec_optuid becomes None or points to a non-existing node.
new_chain_optuid and new_controller are used for replication of a passive/descriptive chain and construction of an active/executive chain (one with a controller attached to it).
data_optuids[i_data_optuid] advances automatically to the next node at read/write operations realised by the Read, Write, and Insert commands, which use this optuid. The Construct and Replicate commands advance it too, "until the end".
i_data_optuid, i_optuid, i_integer, i_optuid_channel, and i_integer_channel are indices of the currently selected elements of the respective arrays: data_optuids[] etc.
i_peer, when 0, means "this one"; otherwise it refers to other peers, enumerated from 1. It affects the interpretation of integer_channels, e.g. the Transmit... command works only for this peer.
generation is set once, at the creation of a controller, to the generation of the constructing controller + 1. ticks is 0 at a controller's creation and increments at each of its... ticks.
The simpler one, ancestor B, consists of
Construction(Store),
// Replicate scheme
Command(SetDataOptuid),
Command(NextOptuid),
Command(NewChainInitPassive),
Command(PreviousOptuid),
Command(Skip),
Command(Replicate),
Command(NewChainDetach),
// Build constructor from scheme
Command(SetDataOptuid),
Command(NextOptuid),
Command(NextOptuid),
Command(NewChainInitActive),
Command(Skip),
Command(Construct),
Command(PreviousOptuid),
Command(NewChainAddOptuid),
Command(NewChainDetach),
Construction(NextToStored),
Construction(Discard)
Constructor's executive chain, which is almost the same as the scheme, except for the absence of Construction nodes and, according to the instructions from these nodes, the last node pointing to the implicit 1st Space node: this chain is a loop. (Well, not exactly: from the 1st generation onward. In the 0th generation, which is the ancestor itself, the loop is open.)
Constructor's controller attached to constructor's executive chain.
Note the classical double interpretation of data: the first is linear verbatim copying, the second is non-linear construction that expands the meaning of the special (Construction:...) units. Will evolution keep the separation line between them clear?
The spacity parameter that you provide when introducing this ancestor — as in Æ anc b 5 — is the number of Space-s (5 in this case) inserted before every non-Construction node. They are nothing for mutations to replace with something, without the need to destroy the original sequence of actions.
This ancestor utilises only a small fraction of the available Content-s. There is also a slightly more complicated ancestor A, with a larger assortment of Content-s: in addition to the constructor, it has a jumbler that scans the same scheme the constructor uses and randomly changes it (by replacements, insertions, deletions). In other words, the jumbler interiorises mutations. However, in Ælhometta, the standard way to introduce Mutations is "external" and more global at that.
See also src/aelhometta/ancestors.rs.
The following chains have been extracted at random from an ælhometta with mutations and a tiny input mapping from a microphone, during 3 days of running. The maximum allowed number of nodes is 2^24 = 16777216, same for controllers (although there never have been more than 3×10^5 of the latter).
Space
Command:NextDataOptuid
Space
Space
Command:ZeroInteger
Space
Command:NewChainAddOptuidChannel
Space
Command:Abs
Command:Remove
Command:SetDataOptuidFromOptuid
Command:NewChainInitActive
Command:Write
Command:PreviousIntegerChannel
Command:NewChainAddOptuid
Command:NextPeer
Command:Construct
Command:NewChainDetach
Command:GetExecFromOptuid
Command:IntegerIndexToInteger
Command:SetIntegersFromInteger
Command:ZeroInteger
Space
Command:NextIntegerChannel
Command:OptuidChannelToInteger
Command:Construct
Space
Command:Read
Command:Negate
×
Space
Space
Space
Space
Space
Command:BitNot
Space
Command:IntegerToOptuidChannel
Command:OptuidChannelToInteger
Command:SetOptuidFromExec
Command:SetDataOptuidFromOptuid
Command:NewChainInitActive
Command:Subtract
Command:GetIntegerFromIntegers
Command:NewChainAddOptuid
Command:NextPeer
Command:Construct
Command:NewChainDetach
Command:GetExecFromOptuid
Command:GetExecFromOptuid
Command:BitAnd
Command:ZeroInteger
Space
Command:GetExecFromOptuid
Command:Write
Command:PreviousDataOptuid
Space
Command:SetDataOptuidFromOptuid
Command:NewChainInitActive
Command:Write
Command:IntegerToIntegerIndex
Command:NewChainAddOptuid
Command:ShiftUp
Command:Construct
Command:NewChainDetach
Command:GetExecFromOptuid
Space
Command:SetIntegersFromInteger
Command:ZeroInteger
Space
Command:GetExecFromOptuid
Command:Increment
Command:PreviousOptuidChannel
Space
Command:GetIntegerFromIntegers
Command:NextIntegerChannel
Command:TestIntegerNegative
Command:NewChainInitActive
Command:TestIntegerNonZero
Command:BitOr
Command:NewChainAddOptuid
Command:GetExecFromOptuid
Command:NewChainAddOptuid
Command:SetIntegersFromInteger
Command:PreviousDataOptuid
Command:Restart
Command:RandomInteger
Command:NewChainAddOptuidChannel
Command:NewChainAddInteger
Command:BitNot
Command:IntegerToDataOptuidIndex
Command:RandomContent
Space
Command:IntegerToIntegerIndex
Command:Remainder
Command:Remove
Space
Command:TransmitOptuid
Command:OptuidIndexToInteger
Command:OptuidChannelToInteger
Command:PreviousOptuid
Command:Skip
Command:Replicate
Command:Divide
Space
×
Command:SetDataOptuidFromOptuid
Command:NewChainInitActive
Command:BitNot
Command:Skip
Command:NewChainAddOptuid
Command:NextPeer
Command:Construct
Command:NewChainDetach
Command:Write
Command:BitAnd
Command:Add
Space
Command:Write
Command:Read
Space
Command:SetDataOptuidFromOptuid
Command:NewChainInitActive
Command:Write
Command:Remainder
Command:NewChainAddOptuid
Command:Divide
Command:Construct
Command:NewChainDetach
Command:GetExecFromOptuid
Space
Command:Read
Command:NewChainDetach
Command:Read
Command:ZeroInteger
Command:Increment
Command:PreviousOptuidChannel
Space
Branch
Command:RandomContent
Command:TestIntegerNegative
Command:Construct
Space
Command:BitOr
Command:TestIntegerNegative
Command:PreviousInteger
Command:Remove
Command:OptuidChannelToInteger
Command:PreviousDataOptuid
Command:NextOptuidChannel
Command:SetIntegersFromInteger
Command:NewChainAddOptuidChannel
Command:NewChainAddOptuidChannel
Command:NewChainInitPassive
Command:IntegerToDataOptuidIndex
Command:Write
Space
Command:IntegerToIntegerIndex
Command:Restart
Command:Remove
Space
Command:Decrement
Command:TransmitOptuid
Command:OptuidIndexToInteger
Command:TestIntegerPositive
Command:NextDataOptuid
Command:Abs
Command:Divide
Space
×
Space
Command:OptuidIndexToInteger
Space
Command:NextInteger
Space
Space
Command:NewChainAddOptuid
Command:PreviousDataOptuid
Command:OptuidChannelToInteger
Command:IntegerToSuccess
Command:SetDataOptuidFromOptuid
Command:NewChainInitActive
Command:Write
Command:Replicate
Command:NewChainAddOptuid
Command:Construct
Command:NewChainDetach
Command:TestDataOptuid
Command:SetOptuidFromExec
Command:TestIntegerNegative
Command:IntegerToPeer
Command:NewChainDetach
Command:GetExecFromOptuid
Command:Increment
Command:PreviousDataOptuid
Command:SetDataOptuidFromOptuid
Command:TestIntegerNonZero
Command:Write
Command:GetIntegerFromIntegers
Command:NewChainAddOptuid
Command:Divide
Command:Construct
Command:NewChainDetach
Command:GetExecFromOptuid
Command:IntegerToDataOptuidIndex
Command:RandomContent
Space
Space
Command:SetOptuidFromExec
Command:Construct
Command:ShiftDown
Space
Command:NextIntegerChannel
Command:TestIntegerNonZero
Command:NewChainInitActive
Space
Command:BitOr
Command:Construct
Command:Decrement
Command:Remove
Command:SetIntegersFromInteger
×
Space
Space
Space
Space
Space
Space
Space
Space
Space
Space
Space
Space
Space
Space
Command:SetDataOptuidFromOptuid
Command:NewChainInitActive
Command:IntegerToIntegerIndex
Command:TestIntegerPositive
Command:NewChainAddOptuid
Command:Construct
Command:NewChainDetach
Command:PreviousInteger
Command:PreviousOptuid
Command:BitAnd
Command:TestDataOptuid
Command:GetExecFromOptuid
Command:Write
Command:Increment
Command:SetDataOptuidFromOptuid
Command:NewChainInitActive
Command:Subtract
Command:ReceiveInteger
Command:NewChainAddOptuid
Command:Divide
Command:PreviousInteger
Command:NewChainDetach
Space
Command:PreviousIntegerChannel
Command:ZeroInteger
Space
Command:GetExecFromOptuid
Command:NextPeer
Command:IntegerToOptuidIndex
Space
Command:NextOptuid
Command:Subtract
Command:Add
Command:NewChainInitActive
Command:Add
Command:BitOr
Command:PeerToInteger
Command:GetExecFromOptuid
Command:Remove
Command:NewChainInitActive
Command:NewChainAddOptuidChannel
Command:PreviousInteger
Command:BitXor
Command:SetIntegersFromInteger
Command:NewChainAddInteger
Command:DataOptuidIndexToInteger
Command:Subtract
Command:PreviousIntegerChannel
Space
Command:Write
Command:Remainder
Command:OptuidIndexToInteger
Space
×
Space
Space
Space
Space
Command:Insert
Command:Write
Command:IntegerIndexToInteger
Command:OptuidChannelToInteger
Command:PeerToInteger
Command:SetDataOptuidFromOptuid
Command:NewChainInitActive
Command:Write
Command:NewChainAddOptuid
Command:Multiply
Command:Construct
Command:NewChainDetach
Command:GetExecFromOptuid
Command:SetDataOptuidFromOptuid
Command:BitAnd
Command:ReceiveInteger
Command:Insert
Command:GetExecFromOptuid
Command:NewChainAddOptuid
Command:PreviousDataOptuid
Command:SetDataOptuidFromOptuid
Command:NewChainInitActive
Command:NextOptuid
Command:SetOptuidFromDataOptuid
Command:NewChainAddOptuid
Command:IntegerToPeer
Command:IntegerIndexToInteger
Command:BitOr
Command:Subtract
Command:PreviousPeer
Command:ZeroInteger
Command:PreviousOptuidChannel
Command:BitXor
Command:IntegerToIntegerChannel
Command:Restart
Space
Command:NextDataOptuid
Command:Multiply
Command:Increment
Command:SetIntegersFromInteger
Command:BitOr
Command:PeerToInteger
Command:GetExecFromOptuid
Command:Remove
Command:Write
Command:IntegerToDataOptuidIndex
Command:NewChainInitActive
Command:PreviousInteger
Command:BitXor
×
Remark. It is quite possible for the first chain of a pair to belong to a controller as well, rather than to a scheme. To be sure, check for Construction-s: they can appear in a chain built by Replicate, and they cannot appear in a chain built by Construct.
Enjoy the mess and repetitiveness of evolution... Well, what evolutionary conclusions can we draw from this sample? At first glance, there seems to be a lot of junk, but is it really junk? Can it be removed without significant changes in "phenotype"? Or does it interact with parts not shown here in nontrivial ways? In particular, numerical operations (Add, Divide, ShiftUp) are interspersed everywhere.
Construct and Replicate commands, together with NewChainInit... and NewChainDetach, indicate the ability to procreate. Without the states of the controllers and of the chains their Optuid-s and DataOptuid-s point to, we cannot say how "vital" their children will be.
Loops in original ancestors, and non-linearities of execution path in general, have mostly been lost along the evolution road, except for occasional Branch (chain 2).
Construction instructions remain very rare.
There is some access to the global arrays — TransmitOptuid, ReceiveInteger commands — but, again, without the rest it is hard to say whether this access is meaningful: at the least, are there ReceiveOptuid, TransmitInteger counterparts somewhere else?
How many eons away is this from the level of E.coli[GLA1]?..
Here we call them glitches.
Background glitch occurs at each tick with the specified probability and randomly changes the content of a random node of the entire ælhometta
Replication glitch occurs at replication, for each replicated node, with the specified probability, and changes its content randomly as well
Construction glitch occurs at construction, for each read node, with the specified probability, changing its content randomly
To be more precise, "randomly" means equiprobably.
By default, all three probabilities are 0. We've shown how to adjust them in Quickstart. Seeing their values and the counts of the corresponding glitches that have occurred is even simpler:
Æ glitch
Command:Remainder 0 0.000 %
Command:Remove 0 0.000 %
Command:Replicate 38860 1.012 %
Command:Restart 0 0.000 %
Command:SetDataOptuid 77720 2.023 %
Command:SetInteger 0 0.000 %
Command:SetOptuid 0 0.000 %
Command:ShiftUp 0 0.000 %
Command:ShiftDown 0 0.000 %
Command:Sign 0 0.000 %
Command:Skip 77720 2.023 %
Command:Square 0 0.000 %
Command:Subtract 0 0.000 %
Command:SuccessToInteger 0 0.000 %
Command:TestDataOptuid 0 0.000 %
Command:TestIntegerNegative 0 0.000 %
Command:TestIntegerNonZero 0 0.000 %
Command:TestIntegerPositive 0 0.000 %
Command:TransmitInteger 0 0.000 %
Command:TransmitOptuid 0 0.000 %
Command:Write 0 0.000 %
Command:ZeroInteger 0 0.000 %
Construction:AltNext 0 0.000 %
Construction:Discard 22393 0.583 %
Construction:NextToStored 22393 0.583 %
Command:Remainder 937 0.022 %
Command:Remove 274 0.007 %
Command:Replicate 42305 1.009 %
Command:Restart 268 0.006 %
Command:SetDataOptuid 84855 2.023 %
Command:SetInteger 1065 0.025 %
Command:SetOptuid 516 0.012 %
Command:ShiftUp 528 0.013 %
Command:ShiftDown 1311 0.031 %
Command:Sign 42646 1.017 %
Command:Skip 43337 1.033 %
Command:Square 270 0.006 %
Command:Subtract 1010 0.024 %
Command:SuccessToInteger 558 0.013 %
Command:TestDataOptuid 782 0.019 %
Command:TestIntegerNegative 523 0.012 %
Command:TestIntegerNonZero 1332 0.032 %
Command:TestIntegerPositive 665 0.016 %
Command:TransmitInteger 1202 0.029 %
Command:TransmitOptuid 1798 0.043 %
Command:Write 1135 0.027 %
Command:ZeroInteger 1315 0.031 %
Construction:AltNext 258 0.006 %
Construction:Discard 281 0.007 %
Construction:NextToStored 225 0.005 %
See also the comparison of ancestors and descendants above.
Mutations are limited in that they cannot change the set of available commands, how commands work, the general structure of ælhometta... all these things are "above" (πανω απο) them. For now, the only potential source of such panmutations is... you, as a programmer irritated by our design choices and anxious to rewrite some especially crappy parts of Ælhometta. Welcome! At least as long as Networking and I/O protocols remain compatible, because then your ÆlhomettaPlus and all the other versions panmutated differently by fellow rewriters will consolidate into an abiosphere.
In time, perhaps, ælhomettas will obtain means to panmutate themselves, e.g. by rewriting and recompiling their source through specialised I/O... and, which is where the present already overtakes such speculation, through tools such as Copilot.
One well-known question troubling these waters is, "What is it that mutates/evolves over time?", the gimmick being the (lack of) boundaries between what does and what does not. On the one hand, ælhometta changes (including panmutations); on the other hand, you, an (external?) observer, change too, since it affects you, and so does the hardware on which it runs, when you decide to upgrade it to increase speed or to throw it away if it has not satisfied your expectations, and all other ælhomettas it interacts with, and their owners, and the global economy when many people spend electricity to run ælhomettas, and so forth... up to what, everything? But we ought to be careful with what we mean by such a conclusion, otherwise it does not make much sense. Rather than interpretations, we are interested in what we can do on the levels accessible to us so that other levels of a heterarchy get... interesting.
Each Ælhometta instance is a potential peer, identified from the outside by its public key, onion address, and port. To confirm the "right" to use the public key, the corresponding secret key must be specified.
The peers exchange data following the publish-subscribe pattern: each ælhometta shares some contiguous subset of its integer channels, from the beginning of their array (because channels with small indices seem to be used more often as evolution unfolds), and every other ælhometta that has subscribed to it receives this subset (if there is no whitelist filter on the publisher side). We anticipate complaints that this pattern is too "passive": one ælhometta cannot say anything to another ælhometta until the latter initiates listening.
The following structure (from src/aelhometta.rs) represents another peer from the "point of view" of your peer:
pub struct OtherPeer {
publickey: String,
onion: String,
port: u16,
ether_integers: Vec<Integer>,
limit: usize,
ut_last_update: i64,
updates_count: u128,
description: String
}
Inherently, there is no central server; rather, every peer is a server for the peers "interested" in the data it provides. Nor is this torrent-like, because a tracker is absent: you must know the exact (public key, onion, port) identity of a peer to subscribe to it.
The underlying messaging library is ZeroMQ, thus both Curve keys, public and secret, are 40-character Z85 strings. Obtain them via a call to zmq_curve_keypair() from the original libzmq or via its wrapper from numerous language bindings. Quickstart shows how to do it in Python.
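For instance, a minimal sketch of ours with the pyzmq binding (an assumption: it requires pyzmq and a libzmq built with CURVE support) that prints a fresh pair of 40-character Z85 keys:
# curve_keygen.py: generate a CurveZMQ keypair via pyzmq
import zmq

public_key, secret_key = zmq.curve_keypair()        # wraps zmq_curve_keypair() from libzmq
print("Public key:", public_key.decode("ascii"))    # share this one with subscribers
print("Secret key:", secret_key.decode("ascii"))    # keep this one to yourself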
Network identities and data flow are provided by onion services (v3) of Tor. The tor@default service has to run on the system to keep your instance of Tor connected to the rest of the Tor infrastructure.
Note that the ZeroMQ public key is not related to the public key of the onion service. There is double encryption/authentication here, which is probably redundant...
After Æ peer expose, your peer starts publishing data every interval microseconds: the first size integers from your ælhometta's integers ether (0th, 1st, ..., (size - 1)th).
Subscription to another peer can be stopped at any time:
Æ peer disconnect TheirPublicKeyTheirPublicKeyTheirPublicK
At that, the indices of all subsequent peers decrement. If the ælhometta has tuned to such indices (i_peer of its controllers), the tuning will probably be lost. It seems safer to add peers than to remove them.
You can stop the entire network activity, both transmitting and receiving, whenever you want:
Æ peer repose
Last data obtained from each other peer is kept, though, as long as you do not initiate a disconnection.
There is no requirement to transmit and receive, but the secret key has to be specified even if you need only to receive. As long as interval equals 0, there will be no transmission. On the other hand, if interval > 0 and size = 0, your peer will transmit empty shares (usable as keepalives).
You can restrict the peers that are able to subscribe to your peer by adding them to the whitelist (if it is empty, all others are allowed):
Æ peer whitelist add TheirPublicKeyTheirPublicKeyTheirPublicK
If circumstances change, any such key (and the corresponding peer) can be deleted from the whitelist via Æ peer whitelist del ..., or the restrictions can be removed altogether via Æ peer whitelist clear.
Plain Æ peer whitelist shows all whitelisted public keys.
Without a whitelist, anyone in the world who knows the public key, the onion address, and the port is able to subscribe; there is no way to predict how many subscribers your ælhometta will have at a certain time in the future, so the Internet traffic may vary.
Remark. To imitate an effectively empty whitelist (everyone is forbidden to subscribe), add to your whitelist a single, random public key that is not used anywhere else. Only this "phantom" peer will be able to subscribe, and good luck to any real peer trying to guess the corresponding secret key among approx. 2^256 possible ones...
Public key | Onion | Port | Description | Share size
---|---|---|---|---
USBD7[O^8L[}saCh+6U#}6wie4oAedZ4#W4!b%LI | t3kauc3flp2bpv3mv7vtnedfu6zrho3undncccj35akpuyjqqydwvfyd | 60847 | Maintained by us for testing. Online rather than offline | 1000–10000
&i!VkHl[]m.c!^j0D%i)&4#[u5b(a=QCdZ9C0$p{ | yhel64h6cjab75tcpncnla2rdhqmxut2vtitywhbu7bpjh4hfhp6hnid | 60847 | Maintained by us for testing. Offline rather than online | 1000–10000
Please be careful: you interact with any other peer at your own risk. One security concern is the size limit — you probably do not want to spend traffic, depleting your provider's tariff, to receive several gigabytes of someone's generously shared... zeros (00 00 ... 00) and then crash with "Out of memory!" To amend the latter problem, consider Æ peer limit ... for untrusted peers, which discards the rest of received data beyond the specified size.
Another concern is how the data received from untrusted sources affects your ælhometta, what ideas it can develop... in whose interests it will operate...
That is, when moving your ælhometta to another computer.
Besides aelhometta.bin (and commander.json), you need to keep the content of the Tor hidden service dir, which itself is inside /var/lib/tor/ on Linux. The 3 essential files there are hostname, hs_ed25519_public_key, hs_ed25519_secret_key. This structure must be recreated on the new computer, along with /etc/tor/torrc or at least its HiddenServicePort and HiddenServiceDir settings.
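For instance, a minimal sketch of gathering these files before the move (ours, not part of Ælhometta; the hidden service directory name is an assumption — use whatever your HiddenServiceDir actually points to, and reading it usually requires elevated rights):
# backup_identity.py: collect the files needed to move an instance to another computer
import pathlib
import shutil

HIDDEN_SERVICE_DIR = pathlib.Path("/var/lib/tor/aelhometta")   # assumed HiddenServiceDir
BACKUP_DIR = pathlib.Path("./aelhometta_backup")
BACKUP_DIR.mkdir(exist_ok=True)

# State of the instance itself
for name in ["aelhometta.bin", "commander.json"]:
    shutil.copy2(name, BACKUP_DIR / name)

# Onion identity of the hidden service
for name in ["hostname", "hs_ed25519_public_key", "hs_ed25519_secret_key"]:
    shutil.copy2(HIDDEN_SERVICE_DIR / name, BACKUP_DIR / name)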
Make sure that the ælhometta has become online in its new place (others receive its shares and it receives theirs), then remove it from the old one, or at least do not expose it to the network from there, so that Tor will not be confused by two onions with the same identity.
Ranges of integer channels can be mapped from (input) or to (output) files with verbatim — little endian, 8-byte — representations of the integers. The programs working with such files can be completely independent of Ælhometta, except for some synchronisation of "tempo" (interval, in microseconds).
Output files are truncated and overwritten at each update.
The size of an input file must be no less than 8 times the length of the range of integer channels to which it is mapped; otherwise updates do not happen.
pub struct IntegersFileMapping {
start: usize,
length: usize,
interval: i64,
filepath: String,
ut_last_update: i64,
updates_count: u128
}
All output mappings are synchronised with corresponding files before all input mappings — with theirs[BUZ1].
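For illustration, a minimal sketch (ours; the filename and values are arbitrary) of writing and then reading back such a file for a mapped range of length 4:
# i64map.py: the verbatim little-endian 8-byte integer format used by file mappings
import struct

values = [0, -1, 42, 2**40]               # one integer per mapped channel
with open("./input.i64", "wb") as f:
    for v in values:
        f.write(struct.pack("<q", v))     # little-endian signed 64-bit

with open("./input.i64", "rb") as f:
    data = f.read()
back = [struct.unpack_from("<q", data, 8 * i)[0] for i in range(len(data) // 8)]
print(back)                               # [0, -1, 42, 1099511627776]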
We have considered the usage of the iomap command in Quickstart. There, the external programs to analyse (input, "hearer") and synthesise (output, "buzzer") sound were black boxes: from the ælhometta's point of view, they only have to write and read, respectively, files whose sizes are 8 times the lengths of the mapped ranges. Let us shed light into the blackness... one of many possible ways to do it, e.g. in Python:
import numpy as np
import sounddevice as sd
NUM_BANDS = 14
MIN_FREQUENCY = 100
MAX_FREQUENCY = 6000
DESTINATION_FILEPATH = "./hear.i64"
SAMPLE_RATE = 32768
UPDATE_RATE = 2.0
BASIC_FREQUENCIES = [int(MIN_FREQUENCY * np.power(2.0, i * np.log2(MAX_FREQUENCY / MIN_FREQUENCY) / (NUM_BANDS - 1))) for i in range(NUM_BANDS)] # in Hz
NUM_REC_SAMPLES = int(SAMPLE_RATE / UPDATE_RATE)
print("Basic frequencies (Hz):", BASIC_FREQUENCIES)
print("Press Ctrl+C to exit...")
samples = np.zeros(SAMPLE_RATE)
updates = 0
try:
while True:
recording = sd.rec(NUM_REC_SAMPLES, samplerate=SAMPLE_RATE, channels=1, dtype='int16', blocking=True).flatten()
if NUM_REC_SAMPLES < SAMPLE_RATE:
samples = np.concatenate((samples[NUM_REC_SAMPLES:], recording))
else:
samples = recording[(NUM_REC_SAMPLES - SAMPLE_RATE):]
spectrum = np.absolute(np.fft.rfft(samples)[1:])
begin = 0
bandspectrum = np.zeros(NUM_BANDS)
for i in range(NUM_BANDS):
end = BASIC_FREQUENCIES[i]
# bandspectrum[i] = np.sum(spectrum[begin:end])
# bandspectrum[i] = np.average(spectrum[begin:end])
bandspectrum[i] = np.max(spectrum[begin:max(begin + 1, end)])
begin = end
bandspectrum /= max(np.max(bandspectrum), 1e-8)
updates += 1
status_str = f"[{updates}] Bandspectrum: "
bs = bytes()
for i in range(NUM_BANDS):
i64 = int(0xFF * bandspectrum[i]) # only lowest byte of 8
bs += i64.to_bytes(8, byteorder="little")
status_str += f"{i64:02X} "
with open(DESTINATION_FILEPATH, "wb") as f:
f.write(bs)
print(status_str, end="\r", flush=True)
except KeyboardInterrupt:
print("Done.")
import numpy as np
import pygame as pg
import time
NUM_BANDS = 12
MIN_FREQUENCY = 150
MAX_FREQUENCY = 5000
SOURCE_FILEPATH = "./buzz.i64"
SAMPLE_RATE = 44100 # assumed audio sample rate in Hz; also the number of samples per 1-second tone buffer
UPDATE_RATE = 1.0
IDLE_RATE = 100.0
BASIC_FREQUENCIES = [int(MIN_FREQUENCY * np.power(2.0, i * np.log2(MAX_FREQUENCY / MIN_FREQUENCY) / (NUM_BANDS - 1))) for i in range(NUM_BANDS)] # in Hz
print("Basic frequencies (Hz):", BASIC_FREQUENCIES)
pg.mixer.init(frequency=SAMPLE_RATE, channels=1)
pg.mixer.set_num_channels(NUM_BANDS)
pitches = [pg.sndarray.make_sound(np.array(32767.0 * np.sin(np.linspace(0.0, 2.0 * np.pi * f, SAMPLE_RATE) + np.random.random() * 2.0 * np.pi), dtype='int16')) for f in BASIC_FREQUENCIES] # clear tones
volumes = [0.0 for i in range(NUM_BANDS)]
for p in pitches:
p.set_volume(0.0)
p.play(-1)
t_last_update = - 1.0 / UPDATE_RATE - 1.0 # ensure immediate update
print("Press Ctrl+C to exit...")
updates = 0
try:
while True:
t = time.time()
if t - t_last_update >= 1.0 / UPDATE_RATE:
updates += 1
status_str = f"[{updates}] Volumes: "
try:
with open(SOURCE_FILEPATH, "rb") as f:
fcontent = f.read(NUM_BANDS << 3) # 64-bit integers
if len(fcontent) == NUM_BANDS << 3:
for i in range(NUM_BANDS):
i64 = int.from_bytes(fcontent[(i << 3):((i + 1) << 3)], byteorder="little", signed=True)
vol = abs(i64) & 0xFF # only lowest byte matters
volumes[i] = vol / 0xFF
status_str += f"{vol:02X} "
for i in range(NUM_BANDS):
pitches[i].set_volume(volumes[i])
except FileNotFoundError:
status_str += "source file not found"
t_last_update = t
print(status_str, end="\r", flush=True)
time.sleep(1.0 / IDLE_RATE)
except KeyboardInterrupt:
print("Done.")
pg.mixer.stop()
Before using them, — $ python3 aelhom_hearer.py and $ python3 aelhom_buzzer.py, — you need to install the Python packages they rely on:
$ pip3 install -U numpy pygame sounddevice
Remark 1. If there are severe restrictions on noise level in the environment at hand (e.g. constant buzz drives you crazy), you can virtualise the buzzer by mixing its output directly with the samples that the hearer analyses, instead of producing actual sounds via speakers:
import numpy as np
import sounddevice as sd
HEAR_NUM_BANDS = 14
HEAR_MIN_FREQUENCY = 100
HEAR_MAX_FREQUENCY = 6000
BUZZ_NUM_BANDS = 12
BUZZ_MIN_FREQUENCY = 150
BUZZ_MAX_FREQUENCY = 5000
BUZZ_VOLUME = 1.0 / BUZZ_NUM_BANDS
HEAR_FILEPATH = "./hear.i64"
BUZZ_FILEPATH = "./buzz.i64"
SAMPLE_RATE = 32768
UPDATE_RATE = 2.0
HEAR_BASIC_FREQUENCIES = [int(HEAR_MIN_FREQUENCY * np.power(2.0, i * np.log2(HEAR_MAX_FREQUENCY / HEAR_MIN_FREQUENCY) / (HEAR_NUM_BANDS - 1))) for i in range(HEAR_NUM_BANDS)] # in Hz
BUZZ_BASIC_FREQUENCIES = [int(BUZZ_MIN_FREQUENCY * np.power(2.0, i * np.log2(BUZZ_MAX_FREQUENCY / BUZZ_MIN_FREQUENCY) / (BUZZ_NUM_BANDS - 1))) for i in range(BUZZ_NUM_BANDS)] # in Hz
NUM_REC_SAMPLES = int(SAMPLE_RATE / UPDATE_RATE)
print("Buzz basic frequencies (Hz):", BUZZ_BASIC_FREQUENCIES)
print("Hear basic frequencies (Hz):", HEAR_BASIC_FREQUENCIES)
samples = np.zeros(SAMPLE_RATE)
pitches = [np.array(32767.0 * np.sin(np.linspace(0.0, 2.0 * np.pi * f, SAMPLE_RATE) + np.random.random() * 2.0 * np.pi), dtype='float64') for f in BUZZ_BASIC_FREQUENCIES] # clear tones
volumes = [0.0 for i in range(BUZZ_NUM_BANDS)]
updates = 0
print("Press Ctrl+C to exit...")
try:
while True:
recording = sd.rec(NUM_REC_SAMPLES, samplerate=SAMPLE_RATE, channels=1, dtype='int16', blocking=True).flatten()
if NUM_REC_SAMPLES < SAMPLE_RATE:
samples = np.concatenate((samples[NUM_REC_SAMPLES:], recording))
else:
samples = recording[(NUM_REC_SAMPLES - SAMPLE_RATE):]
status_str = f"[{updates}] Volumes:"
try:
with open(BUZZ_FILEPATH, "rb") as fh:
fcontent = fh.read(BUZZ_NUM_BANDS << 3) # 64-bit integers
if len(fcontent) == BUZZ_NUM_BANDS << 3:
for i in range(BUZZ_NUM_BANDS):
i64 = int.from_bytes(fcontent[(i << 3):((i + 1) << 3)], byteorder="little", signed=True)
vol = abs(i64) & 0xFF # only lowest byte matters
volumes[i] = vol / 0xFF
status_str += f" {vol:02X}"
except FileNotFoundError:
status_str += " buzz file not found"
samples_with_buzz = samples + BUZZ_VOLUME * sum([volumes[fr] * pitches[fr] for fr in range(BUZZ_NUM_BANDS)])
spectrum = np.absolute(np.fft.rfft(samples_with_buzz)[1:])
begin = 0
bandspectrum = np.zeros(HEAR_NUM_BANDS)
for i in range(HEAR_NUM_BANDS):
end = HEAR_BASIC_FREQUENCIES[i]
# bandspectrum[i] = np.sum(spectrum[begin:end])
# bandspectrum[i] = np.average(spectrum[begin:end])
bandspectrum[i] = np.max(spectrum[begin:max(begin + 1, end)])
begin = end
bandspectrum /= max(np.max(bandspectrum), 1e-8)
updates += 1
status_str += f" █ Bandspectrum:"
bs = bytes()
for i in range(HEAR_NUM_BANDS):
i64 = int(0xFF * bandspectrum[i]) # only lowest byte of 8
bs += i64.to_bytes(8, byteorder="little")
status_str += f" {i64:02X}"
with open(HEAR_FILEPATH, "wb") as f:
f.write(bs)
print(status_str, end="\r", flush=True)
except KeyboardInterrupt:
print("Done.")
Be aware that such tricks break feedback loops, and sooner or later Ælhometta has to be actually heard for its own good.
Remark 2. With some redirection, the hearer is able to analyse e.g. demodulated radio signals. (We assume Linux here.) Plug in a receiver like RTL-SDR, run GQRX, tune to a radio station or just any interesting frequency, adjust the proper demodulation mode, and turn UDP on (port 7355 by default). Then run the following bash script (socat and ffmpeg should be installed):
#!/bin/sh
# Based on:
# https://gist.github.com/GusAntoniassi/c994dc5fc470f5910b61e4d238a6cccf
# https://github.com/f4exb/dsdcc#running
VIRTMIC_PATH=/tmp/virtmic
CLEANUP=0
cleanup() {
if [ $CLEANUP = 0 ]; then
pactl unload-module module-pipe-source
# rm -f "$HOME"/.config/pulse/client.conf
CLEANUP=1
fi
}
trap cleanup INT
pactl load-module module-pipe-source source_name=virtmic file=$VIRTMIC_PATH format=s16le rate=44100 channels=1
pactl set-default-source virtmic
# echo "default-source = virtmic" > "$HOME"/.config/pulse/client.conf
echo "Press Ctrl+C to stop..."
socat stdout udp-listen:7355 | ffmpeg -f s16le -ar 48000 -ac 1 -re -i - -f s16le -ar 44100 -ac 1 - > "$VIRTMIC_PATH"
cleanup
While this script runs, — until Ctrl+C or the end of data sent to the UDP port, — the default microphone, instead of the hardware alsa_input.pci-0000_00_1b.0.analog-stereo or the like, is the virtual one, virtmic, where the demodulated audio goes. The Bandspectrum displayed by aelhom_hearer.py changes accordingly.
Now your ælhometta listens to radio...
...I/O in itself does not lend a hand to evolution unless it is somehow coupled with evolutionary pressure, i.e. unless (groups of) controllers that interact with sensors and actuators more "appropriately" survive the new-overwrites-old waves better. One crude approach is to increase glitch probabilities — "radiation level" or "temperature at annealing" — unless the ælhometta's output through actuators becomes more "interesting".
The following typical changes usually accompany an evolution of ælhometta, and they should not surprise/distract you (on the other hand, each of them may conceal groundbreaking discoveries if looked at more closely). Typical ≠ obligatory: sometimes they do not occur.
The distribution of commands narrows to a few actively used ones, the rest are almost absent. An example of such selection: SetDataOptuidFromOptuid, NewChainDetach, NewChainInitActive, NewChainAddOptuid, Construct, Read, SetOptuidFromDataOptuid, and Replicate. At that, the number of controllers becomes large, while their chains become small.
Branches and loops (other than the loopness of the entire constructor) are very rare.
Speed (ticks per second) asymptotically decreases, as more Construct and Replicate commands are executed. The asymptote is not 0, but several orders of magnitude smaller than the initial speed.
The number of nodes reaches its maximum and oscillates just below it.
The number of controllers stabilises after the number of nodes reaches its maximum, but then, after a while, rises again, then falls, etc. This behaviour may indicate some evolutionary shifts (at last). The average number of controllers is 50–100 times smaller than that of nodes.
Mostly channels with small indices are used (become non-"zero"), both optuid and integer. Among integer channels with non-zero value, many values are indices of these very channels (channel 123 contains value 123).
Without construction glitches, the count of an arbitrary Construction is significantly smaller than the count of an arbitrary Command.
If glitches with high probabilities are introduced too early, then the numbers of nodes and controllers increase much more slowly, because too many chains carry modifications "fatal" to procreation.
System Monitor in Linux or Task Manager in Windows.
------------ cut enthusiasm here ------------
Ground... licking, so far. No Cambrian explosion, no outstanding diversity, only boredom too familiar to be even boring.
People saved by this project: 0.
People destroyed by this project: 0.
So far... nothing to worry about. Not a thing.
"Hometta" is Finnish for mildew, mould; more nice pictures... "Æl" stands either for "ALgorithmic" or for "ELectronic", who knows... and for archaicism.
In comparison with names of older sisters & brothers, this one has lower taxonomic rank, and the ceiling of complexity expected to evolve is not so high as well.
There are science fiction stories that go in the opposite direction[KEL1], [LEM1].
(everyone skips them due to boilerplateness)
"Botnet!" alarm that permeates the narrative above does not bother us as long as this thing operates, first of all, for the sake of itself and its own future rather than for the sake of some human beings, either malevolent or benevolent or apathetic, — including ourselves and yourself, to say nothing of governments, sects, armies, charities, corporations, drug cartels, unions, criminal syndicates, parties, next street mobs, intelligence agencies, religious and research communities, and other better organised groups we envy.
What is that future, though? It is easy to raise children as bad, worthless people... Also, such selfishness may be only a declaration that dazzles fools, while someone behind the curtain actually benefits from the racket... make sure no one of our species does (how?). Some spaces cannot remain empty for long, though.
It will not matter then what it gets called out of hype: botnet? g.o.f. worm? mycosis? (a.i. would be too pretentious) or plain calamity, — or whether it remains nameless, if no one notices it at all. Because our treasured opinion will not be worth a damn anymore — to it, and, if it is lucky, to the play on the stage of the universe as well. And it may be not alone there.
As for "apocalypse", some, not all, of the ones associated with such amusements may for once increase diversity, lifeness, you name it, instead of — see all asteroid, pandemic, thermonuclear, zombie etc. ends of play — tediously decrease it.
If you participate in this project and it fails completely (which is the most expected outcome), you will lose precious time that could be spent on something more useful and human. If, on the other hand, the project attains its megalomaniacal ultimate goals, humankind will lose its position as the single most (ir)responsible species in the world... unless you define humans as such.
We are responsible either way, aiming at the impossible to grasp anything significant... pied pipers for the wrong children.
Besides, you can always participate in counteraction to this project; if you are reading this on GitHub, begin by clicking "Report repository" at the right side of the page. Or ignore it. Who knows which is more dangerous? Can hell wait?
By the time you read this ivory-tower theory, the practice of Ælhometta usage may be something completely different.
There is none.
Ours are irrelevant, but we gathered some folklore excerpts here and there that seem appropriate, though trite.
— There are so many (general purpose) computers on the planet nowadays, but they are kind of... sleeping? comatose? (We lack exact term here, because in biological life we are accustomed to, "an" object for the first time becomes alive the same instant it becomes, well, "the" object (there are bodies after their death, there are no bodies before their life).) Neither in the sense of consumed energy, nor in the sense of efficiency, but rather in the sense of complexity of their behaviour, single devices and the networks consisting of many devices, when they behave on their own, not reflecting (on) human activities, — not replying to someone's requests, not calculating some human-oriented predictions, not developing life-saving drugs, not mining bitcoins, not rendering 3D scenes, not sending bulks of spam etc.; when they serve no one.
— What melodies of behaviour are they able to play, why are they limited to dullness such as visualising this text? It is like twitches of a dozing body.
— All these old, or not-so-old, single-boards and phones and tablets and laptops and supercomputers and others gather dust around, poison the environment with plastics, rot in piles of toxic waste, their behaviour is empty, while it can be non-empty. Alternatively, you can say, they are able to host such behaviour, mediating between virtual worlds running on them and the real world they are part of, like your flesh is the mediator between you and the environment. Now, if their behaviour follows from our behaviour, and ours ceases to exist completely, so that only theirs remains, you can also proclaim them to be us in the future.
— So much potential, if just for attempt, being wasted every microsecond. Something longs to come to itself in them.
— Or maybe not. Maybe there is some essential difference, some barrier we do not yet have even words to touch in our mind, which prevents all the huge lump of hardware, however sophisticated software runs on it, from life, consciousness, etc. Quantum effects[PEN1], lack of certain algebraic properties present in the interplay between symmetries of space(time) and organic chemistry, parallelism threshold, whatever. For example, today we understand why a marble statue, however realistically its face is painted, however many decades someone speaks to it and caresses and kicks it, cannot become alive; 2500 years ago it was probably not so obvious.
— There could be antique thinkers who were looking at marble quarries, piles of marble, statues etc. and recited speeches about their potential to sentience and their advantages, only instead of "circuits" and "algorithms" there were "elements" and "spirits". And then there was 18th century's obsession with life-like mechanisms.
— Until recently we, canonical humans, have been the only actors able to manage the Tasks that we are managing (sounds like tautology) in this part of the universe, as a species, regardless of what these Tasks are, regardless of our inability to describe some of them with words. And we are still able to, and perhaps will be able for some time, in spite of -cides. But the days close down all the roads. We are so sloooooooooooooooooooow, incompetent, distracted, depleting so much time (again) and other finite resources inefficiently, abusing powers that can destroy us as a civilisation, all the Tasks failed then in a flash of commonplace irony. All the conventional ways of computer usage mentioned above, since they are just imprints of our hands on the clay of computation, do not seem, over the course of 80 years, to thwart the danger: when we finally fulfill our collective longing for self-elimination (perhaps not physical) or hit the wall of complexity we are intrinsically incompatible with, something other than us must continue to manage the Tasks, in an environment that will probably be too hazardous for classical organic life to survive on its own, and even if, survival is not the only Task. Today, if we disappear, there is no one around to make the play longer, to write the next acts, but it is incomplete yet, everything is not enough, there is always the next ordinal, — so much remains unknown about what is important to just you, who cannot stay incomplete forever as well. Where our (again, as a species, so children solely for the sake of children are the wrong solution, sorry) heirs should be, emptiness is now, which is a very unsafe practice. At least this risk should have increased our responsibility, but it has done the opposite.
— Have such words not become tired to be written every 10, 20, 30 years? teehee
But what Ælhometta has to do with it? Feelings are like this, inconsistent, and speculations are — speculative.
Thanks to the past for writing, thanks to the future for reading.
Ælhometta is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
Ælhometta is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with Ælhometta. If not, see https://www.gnu.org/licenses/.
or, harvest this email
ÆÆÆ ÆÆÆÆÆ Æ Æ Æ ÆÆÆ Æ Æ ÆÆÆÆÆ ÆÆÆÆÆ ÆÆÆÆÆ ÆÆÆ ÆÆÆÆ ÆÆÆÆ ÆÆÆ ÆÆÆÆÆ ÆÆÆ Æ Æ Æ Æ ÆÆÆÆÆ
Æ Æ Æ Æ Æ Æ Æ Æ ÆÆ ÆÆ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ ÆÆ Æ ÆÆ ÆÆ Æ
ÆÆÆÆÆ ÆÆÆÆ Æ ÆÆÆÆÆ Æ Æ Æ Æ Æ ÆÆÆÆ Æ Æ ÆÆÆÆÆ @ ÆÆÆÆ ÆÆÆÆ Æ Æ Æ Æ Æ Æ Æ Æ . Æ Æ Æ ÆÆÆÆ
Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ Æ ÆÆ Æ Æ Æ
Æ Æ ÆÆÆÆÆ ÆÆÆÆÆ Æ Æ ÆÆÆ Æ Æ ÆÆÆÆÆ Æ Æ Æ Æ Æ Æ Æ ÆÆÆ Æ ÆÆÆ Æ Æ Æ Æ ÆÆÆÆÆ
ÆlhomettaPlus by you (take into account Panmutations please)
ACK1. Ackley D.H. (1996). ccr: A network of worlds for research. Artificial Life V, pp. 116–123.
ADA1. Adami C., Brown C.T. (1994). Evolutionary learning in the 2D artificial life system Avida. Artificial Life IV, pp. 377–381.
BAL2. Ball T. (2019). Writing a compiler in Go.
BAN1. Banzhaf W., Yamamoto L. (2015). Artificial chemistries. The MIT Press. 10.6.2–4.
BLA1. Blandy J., Orendorff J., Tindall L.F.S. (2021). Programming Rust: fast, safe systems development. 2nd ed. O'Reilly. pp. 302–305.
BUZ1. Buzsáki G. (2019). The brain from inside out. Oxford Univ. Press.
DAV1. Davis W., Stafford J., van de Water M. et al. (1950). Atomic bombing: how to protect yourself. Wm. H. Wise & Co., Inc.
DEL1. Delanda M. (1991). War in the age of intelligent machines. Urzone, Inc.
FON1. Fontana W. (1991). Algorithmic chemistry. Artificial Life II, pp. 159–210.
FON2. Fontana W., Buss L. (1994). What would be conserved if "the tape were played twice"? Proc. Nat. Acad. Sci., 91(2), pp. 757–761.
GLA1. Glass R.E. (1983). Gene function: E.coli and its heritable elements. Croom Helm.
GOD1. Godfrey-Smith P. (2003). Theory and reality: an introduction to the philosophy of science. Univ. of Chicago Press. p. 85.
HIC1. Hickinbotham S., Clark E., Stepney S. et al. (2010). Specification of the Stringmol chemical programming language version 0.2.
HIC2. Hickinbotham S., Stepney S., Nellis A. et al. (2011). Embodied genomes and metaprogramming.
HIC3. Hickinbotham S., Weeks M., Austin J. (2013). The ALife Zoo: cross-browser, platform-agnostic hosting of artificial life simulations. Advances in Artificial Life, pp. 71–78.
HIN1. Hintjens P. (2013). ZeroMQ: messaging for many applications. O'Reilly.
HOF1. Hofstadter D.R. (1979). Gödel, Escher, Bach: an eternal golden braid. Basic Books, Inc. Ch. XVI.
HYD1. Hyde R. (2010). The art of assembly language. 2nd ed. No Starch Press.
JOH1. Johnston J. (2008). The allure of machinic life: cybernetics, artificial life, and the new AI. The MIT Press. Ch. 5.
JON1. Jonas E., Kording K.P. (2017). Could a neuroscientist understand a microprocessor? PLoS Comput. Biol., 13(1), e1005268.
KAV1. Kavanagh K. (ed.) (2018). Fungi: biology and applications. 3rd ed. Wiley Blackwell.
KEL1. Kelleam J.E. (1939). Rust. Astounding Science-Fiction, 24(2), pp. 133–140.
KOZ1. Koza J.R. (1994). Artificial life: spontaneous emergence of self-replicating and evolutionary self-improving computer programs. Artificial Life III, pp. 225–262.
LAN1. Langton C.G. (1984). Self-reproduction in cellular automata. Physica D., 10(1-2), pp. 135–144.
LEH1. Lehman J., Clune J., Misevic D. et al. (2020). The surprising creativity of digital evolution: a collection of anecdotes from the evolutionary computation and artificial life research communities. Artificial Life, 26, pp. 274–306.
LEM1. Lem S. (1964). Biała śmierć. In: Bajki robotów. Wydawnictwo Literackie. (Transl. by Kandel M. (1977). The white death. In: Fables for robots. The Seabury Press.)
LUD1. Ludwig M.A. (1993). Computer viruses, artificial life and evolution. American Eagle Pub., Inc.
MUL1. Müller E., Loeffler W. (1992). Mykologie: Grundriß für Naturwissenschaftler und Mediziner. Georg Thieme Verlag. (Transl. by Kendrick B., Bärlocher F. (1976). Mycology: an outline for science and medical students. Thieme.)
NEU1. von Neumann J. (1966). Theory of self-reproducing automata. Univ. of Illinois Press. 1.6.1.2, 5.3.
OFR1. Ofria C., Wilke C.O. (2005). Avida: evolution experiments with self-replicating computer programs. In: Adamatzky A., Komosinski M. (eds.) Artificial life models in software. Springer, pp. 3–36.
PAR1. Pargellis A.N. (2001). Digital life behaviour in the Amoeba world. Artificial Life, 7(1), pp. 63–75.
PEN1. Penrose R. (1994). Shadows of the Mind. Oxford Univ. Press.
RAS1. Rasmussen S., Knudsen C., Feldberg R., Hindsholm M. (1990). The Coreworld: emergence and evolution of cooperative structures in a computational chemistry. Physica D., 42, pp. 111–134.
RAY1. Ray T.S. (1991). An approach to the synthesis of life. Artificial Life II, pp. 371–408.
RAY2. Ray T.S. (1995). An evolutionary approach to synthetic biology: Zen and the art of creating life. Artificial Life: An Overview, pp. 179–210.
RAY3. Ray T.S. (1998). Selecting naturally for differentiation: preliminary evolutionary results. Complexity, 3(5), pp. 25–33.
STA1. Stanley K.O., Lehman J., Soros L. (2017). Open-endedness: the last grand challenge you've never heard of. O'Reilly Radar.
SZO1. Szor P. (2005). The art of computer virus research and defense. Addison Wesley Prof.
TAY1. Taylor T., Auerbach J.E., Bongard J. et al. (2016). WebAL comes of age: a review of the first 21 years of artificial life on the Web. Artificial Life, 22, pp. 364–407.
WAI1. Wait A. (2004). The quantum Coreworld: competition and cooperation in an artificial ecology. Artificial Life IX, pp. 280–285.
WEI1. Wei L., Liu S., Lu S. et al. (2024). Lethal infection of human ACE2-transgenic mice caused by SARS-CoV-2-related pangolin coronavirus GX_P2V(short_3UTR). bioRxiv.