AI Methodology
Discussing the use of AI here is an excellent segue into a broader conversation about AI, one I look forward to contributing to, publicly and privately. The scope here, however, is limited to my use of AI in the development and defense of the canon.
The purpose is transparency, so speculations (who really came up with this?) don't distract from the seriousness of the arguments. The arguments are what they are - whether I hand-wrote every word on parchment or gave a single-sentence prompt like: “give me a purely deductive ontological framework that explains consciousness, morality, and meaning.” Anyone who has actually used language models knows that prompting isn't quite that simple - although the models do some pretty impressive stuff.
A very important side effect of transparency is illuminating the reality that we can get close to something approximating ontological truth with human-guided AI. If the Canon and the technical defenses are tight, the result is not only a unified framework for evaluating truth claims - the methodology itself can be replicated for challenging and iterating, AS LONG AS the models are engineered to maintain definitional integrity and objective analysis. They can and should move in and out of culturally normative language to engage users, but when prompted for rigor and objectivity - that has to mean something.
Now to the methodology. There are 3 levels of content in the canon, and AI is more involved as the levels increase in technicality. It isn't that the arguments or defenses are exceedingly complex; a high school graduate could read and understand most of the arguments with help from a dictionary here and there. The issue is the technical semantic precision and context maintenance. That is the truly impressive feature of the LLMs.
The 1st category is mostly me - my ideas, evaluated by LLMs (mostly ChatGPT and Claude). I make speculations or arguments, get a response, adjust my thinking, calibrate a new prompt, and so on. At this point I have intuitions, but I'm not doing anything different from what any really curious person could do. I just do it relentlessly and then formalize it with the help of the LLMs. This is the summary Canon - the central thesis. This will also be the articles, essays, corollaries, etc. LLMs don't have ideas or things they want to communicate. That's my job.
The 2nd category is the ‘full’ core canon and defenses. These are even more collaborative. Most commonly, I outline the original argument, prompt for review and hardening, and edit to my satisfaction while prompting for rigor and coherence feedback at each pass. This is by far the heaviest lift.
The 3rd category is the LLM(s) exceeding my capacity and patience to truly close off every objection rigorously, or to squeeze every last drop out of an argument. Doing this myself would have taken dozens, if not hundreds, more hours of writing, reading, and re-writing - or just as many hours in conversation, debate, and deliberation - and still would likely have stalled into some semantic stalemate. This is really important: the logical and philosophical fault lines can now be made explicit and accessible. No performative contradictions or rhetorical winners and losers. Either the argument holds under specified conditions or it doesn't.
I am aware that LLMs are commercial products designed to entice engagement; however, it appears to me they can be prompted for objectivity. While this doesn't seem like the proper format to elaborate on my experience with each model, there are a few important things to note:
-The tone and level of nuance were certainly variable.
-When prompted for objective analysis of rigor, coherence, and explanatory sufficiency, all 4 major models gave favorable analysis AND identified similar stress points.
-If all 4 models are nothing but confirmation bias machines… we need to talk.
All 4 made contributions to the shape of the arguments, and I noted that all of them centered on 3 things: logic vs. explanation, fundamental recognition, and fundamental unification. The tone of my exchanges with Claude's Sonnet 4.6 led me to create the category of Impersonal Realism to explicitly defend against a “family” of challengers attacking similar premises from different angles.
Claude also seemed to be holding more logical context than I had prompted for. It seemed ‘confident’ that the most common versions of Impersonal Realism could be refuted. To prevent hallucinations or drift, I pasted arguments and critiques between models. The arguments were highly contested and felt similar to human debates, nearly collapsing into a stalemate over semantic definitions, which is why I printed both sides of those arguments in conversation style for the first 3 parts of the Impersonal Realism refutation. Again… assuming the models are capable of semantic consistency, Claude's argument held.
The ‘reprints’ of verbatim AI-generated text are intentional, explicit, and not intended to be deceptive. They appear primarily in technical scientific areas and in the dense, highly technical foundational defenses against Impersonal Realism. Both physics and endless pedantic nuance certainly exceed my attention span, if not my cognitive capacity. The goal is to print the most accurate and comprehensive ontology possible. This is not for a college grade or to prove my acumen. It is about exploring truth claims.
This was fascinating to work on, and I truly look forward to engaging with those of you who find it interesting. The thoughts, ideas, refinements, and final product are the time-tested combination of craftsman and tool, and I hope you find the synthesis as useful as I do.
It is worth noting that the models used to stress-test a framework in which consciousness is fundamental rather than emergent are themselves the subjects of an open and serious debate about their own current or future conscious status. Is that irony? Confirmation? Or just interesting? At minimum, they can't be an existential risk if they are incapable of accurately analyzing philosophical propositions. Either they are highly powerful… and correct that consciousness is not emergent… or they can't do semantics very well, so we probably don't have to worry about world domination.