We have Plugmatic, which greatly simplifies installing and upgrading published plugins. We ask here how we can extend this convenience to new plugin development.
See Mixed Content where we motivate our use of plugins.
See Plugin Catalog where management thinking began.
# Types
A wiki page is filled with items that are rendered by plugins retrieved by name based on the type field of each item.
Item types are drawn from a flat namespace of single words like 'paragraph' or 'morseteacher'. We might camel-case these names in documentation, but only lowercase ASCII letters, a-z, are used in types.
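For example, a paragraph item in a page's story carries its type alongside an id and its markup. This is a sketch of the item format; the id and text here are illustrative:

```json
{
  "type": "paragraph",
  "id": "b2f1c6d8a4e3f7a9",
  "text": "See [[Plugin Catalog]] where management thinking began."
}
```

The origin server uses the type field to retrieve the plugin, here the one named 'paragraph', that will render the item.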
Item types extend JSON, a format which claims to be without revisions. We similarly ignore any notion of revision for plugins at the level of JSON, where we aspire for longevity. We expect every plugin to be tolerant of markup it doesn't understand, even if it only renders raw markup and a plea for help. We presume the Journal will provide future historians enough clues to read our writing.
# Versions
Two threads join at the origin server. One maintains that types are flat and simple, the other recognizes the complexity of building and installing software.
Type --> Fetch --> Origin --> Emit

Build --> Publish --> Install --> Origin
We've chosen to install plugins with npm. Given this we have several choices as to where in the build thread diversity is collapsed into simplicity.
The install step could choose a version to be installed. The npm command line offers many variations, but only selecting by version number is supported through Plugmatic installs.
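A sketch of what this version selection looks like at the npm command line; the plugin name and version numbers are hypothetical:

```shell
# install the latest published version
npm install wiki-plugin-example

# pin an exact version, the only variation Plugmatic supports
npm install wiki-plugin-example@1.2.3

# npm also accepts ranges and dist-tags, which Plugmatic does not
npm install wiki-plugin-example@^1.2.0
npm install wiki-plugin-example@next
```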
The origin serving step could choose among multiple versions installed in parallel namespaces. This is awkward because in many places the code assumes a choice has already been made.
The publish step could create more choices to be selected from later in the thread. Plugmatic expects metadata from npm, but we could do without it when publishing to GitHub or another git repo. The complexity then moves into Plugmatic.
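npm can already install straight from a git repo, which suggests what publishing without the registry might look like; the repo URL and branch name here are hypothetical:

```shell
# install an experimental plugin directly from a git branch,
# skipping the npm publish step entirely
npm install git+https://github.com/example/wiki-plugin-example.git#experimental
```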
The publish step could be faked with a proxy in front of npm that substitutes experimental code without the downstream install realizing the difference.
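One way to sketch such a proxy is a private registry like Verdaccio, which serves packages it holds locally and forwards everything else upstream to npm. The port is Verdaccio's default; the rest is illustrative:

```shell
# point npm at a local registry proxy; requests for packages
# it doesn't hold locally fall through to the public registry
npm config set registry http://localhost:4873/

# publish experimental code to the proxy under the published name;
# downstream installs now receive the substituted version
npm publish --registry http://localhost:4873/
```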
We've considered all of these approaches and find no clear winner. We explain the problem in this detail here hoping that yet another alternative might surface.
.
Much of what I've been working on in federated wiki for the past few years falls in the Build --> Publish --> Install branch of that graph, mostly with docker. I'm still working on a process to include experimental plugins in my docker compositions.
This kind of farm, which is also a Laptop Server, is configured so one person owns all of the subdomains. It is intended to be 1) a relatively easy way to start your own local wiki farm, and 2) a helpful example of how to configure various components. We share configurations for Docker and docker-compose to offer more detailed conversation.
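A minimal sketch of what such a composition might contain; the wiki image, flags, and ports are illustrative assumptions, not the actual Local Farm files:

```yaml
# a sketch of a single-owner farm composition
services:
  farm:
    image: example/wiki        # hypothetical image; substitute your own wiki build
    command: --farm --security_type friends
    volumes:
      - wiki-data:/home/node/.wiki
  proxy:
    image: caddy:2             # reverse proxy terminating TLS for the subdomains
    ports:
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
volumes:
  wiki-data:
```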
Most of my early docker experiments were searching for different parts of the puzzle that grew up to be Local Farm. github
• There's a git service described in the README for the container-terminal experiments. I imagined that thing acting as a local git repo to self-host the wiki code and my (future) experimental plugins.
• There's a DNS service which has been replaced by localtest.me in recent experiments. yak shaving at github
• In the Yak Shaving example there are early attempts at providing TLS with HashiCorp's Vault. Those are replaced with a Caddy reverse proxy using self-signed certs in the Local Farm mentioned previously.
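The Caddy arrangement can be sketched in a few lines. The hostname pattern and upstream address are illustrative; `tls internal` is Caddy's directive for issuing certificates from its own local CA, which is what makes the self-signed setup work without Vault:

```
# Caddyfile sketch: terminate TLS with Caddy's internal CA and
# forward each subdomain to the wiki farm; names and ports are illustrative
*.localtest.me {
  tls internal
  reverse_proxy farm:3000
}
```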