Black Tree AutoML
Humanity’s Fastest Deep Learning Software
Charles Davi, Founder
Email: charles [dot] cd [dot] davi [at] gmail [dot] com

About Me
I am a mathematician who worked in financial services for eight years, most recently at BlackRock, spending a significant portion of my free time conducting research in information theory. I spent the last five years conducting this research full-time, and the last two years coding and writing full-time. In addition to my scientific writing, I’ve published articles on banking, finance, and economics in The Atlantic and elsewhere, which have been widely cited by bank regulators and other legal and financial professionals, including Judge Richard Posner.
I received my J.D. from New York University School of Law, and my B.A. in Computer Science from Hunter College, City University of New York.
About My Work
I’ve reduced machine learning and deep learning to a set of algorithms so fast that they can run on any consumer device, with runtimes that are a small fraction of those of comparable techniques. I’ve also rewritten all of special relativity using objective time, and developed a novel and unified theory of gravity, charge, and magnetism.
Formal working papers are available on my ResearchGate homepage.
For a high-level summary of my work in A.I., read my paper, Vectorized Deep Learning.
Theoretical Foundations
All of my work in physics and artificial intelligence follows almost entirely from the works of Alan Turing and Claude Shannon.
My model of physics treats reality itself as a computational engine and, in particular, treats elementary particles as combinatorial objects. I show that, remarkably, Einstein’s equations for time dilation follow from this model, despite the fact that it bears no superficial resemblance to relativity. In short, I’ve developed an entirely new model of physics, one closer to Newton’s idea of a mechanical universe, built using contemporary theories of information and computation.
My model of artificial intelligence performs the tasks accomplished by machine learning and deep learning algorithms, but is radically more efficient than any other algorithms I’m aware of: all of my algorithms have low-degree polynomial runtimes, allowing them to perform extremely high-dimensional, sophisticated tasks, such as 3D object classification, projectile path prediction, and image classification, quickly and accurately on ordinary, inexpensive consumer devices.
The fundamental observation that underlies my model of AI is that the complexity of an object depends upon the level of granularity at which we observe it. Viewed in fine detail, an object’s complexity is high; viewed at a less detailed, “impressionistic” level, its complexity is low.
This simple, common-sense observation is remarkably useful. Specifically, my algorithms search for a locally optimal level of complexity in between these two extremes, which I’ve found to be the point at which the actual structure of an object comes into focus. This allows my algorithms, for example, to categorize a dataset or partition an image with no prior information at all, simply by iterating through different levels of granularity until they find the optimal level of complexity, the one that reveals the actual structure of the data or the image.
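To make this concrete, below is a minimal sketch in Python of such a granularity sweep over a one-dimensional dataset. Everything in it is illustrative: the entropy-of-quantized-values complexity measure, the second-difference “bend” criterion, and the function names are stand-ins chosen for exposition, not the actual Black Tree algorithms.

import numpy as np

def quantize_entropy(x, k):
    """Shannon entropy (in bits) of dataset x viewed at granularity k,
    i.e., quantized into k equal-width bins."""
    counts, _ = np.histogram(x, bins=k)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def optimal_granularity(x, k_min=2, k_max=64):
    """Sweep from a coarse view (low complexity) to a fine view (high
    complexity) and return the granularity where the complexity curve
    bends most sharply; used here as an illustrative stand-in for the
    point at which the structure of the data comes into focus."""
    ks = np.arange(k_min, k_max + 1)
    h = np.array([quantize_entropy(x, k) for k in ks])
    # Second difference of the entropy curve: its peak marks the level
    # where adding further detail stops revealing new structure.
    bend = np.abs(np.diff(h, n=2))
    return int(ks[np.argmax(bend) + 1])

# Example: two well-separated clusters, with no labels or priors.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(10, 1, 500)])
print("selected granularity:", optimal_granularity(x), "bins")

Any complexity measure that grows with granularity could be substituted here; Shannon entropy is used only because it is the most familiar choice.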
This simple initial procedure allows a core set of three algorithms (image partition, clustering, and prediction) to accomplish nearly everything that can be done in AI, with simple “plug-ins” that address the particular task at hand.
Resume and Selected Papers
Information, Knowledge, and Uncertainty
Follow Black Tree on LinkedIn
For updates on new releases and academic papers