My Research

My research investigates the mechanisms at work in AI models of language, specifically large language models (LLMs) implemented by artificial neural networks. A network-based understanding of language is prominent in Kuhn's later work, which described some of the mechanisms of interaction with a conceptual web that an artificial neural network would need to perform. While researching this topic, I worked in an artificial intelligence research lab; that work resulted in a peer-reviewed publication in AI Magazine, for which I was the lead author, discussing the methodological structure of AI research.

Wittgenstein's discussion of family-resemblance definitions points to the need for vector-based semantic mapping and cluster-based analysis, but it does not clearly describe how the nodes of such a network are typically related to one another. To fill that intellectual gap, I have focused on the analytic Buddhist philosophy of the Abhidharma period, which employed a connectionist model while introducing a process-based ontology and describing how different elements of thought depend on one another for their meaning. I have recently submitted a book chapter for peer review that summarizes work from my dissertation comparing large language models with the Buddhist philosophy of mind of the Abhidharma period. I am also researching the neurology involved in making multimodal comparisons relevant to understanding paradigms.