The Merger of AI and Blockchain: Two Great Hypes that Hype Great Together


In the modern tech world, there are two buzzwords to end all buzzwords: Blockchain and AI. Both have been individually heralded as a major step forward for the species, and a panacea that will someday allow us to fix virtually every problem in the modern world -- and if you do a Google search for ideas to combine the two, the claims get even more exaggerated. But beyond the hype, there is very real potential for the melding of blockchain and AI to widen the impact of both technologies, and to quicken the pace of world-changing new technological releases. To an entrepreneur on the prowl for the hottest new trend to exploit, the meeting of blockchain and AI certainly sounds like one of the most potentially profitable ideas around.

To understand why, we first need some very basic conceptual background on both blockchain and AI itself. We at Vanbex have been putting together explainers on blockchain for some time now, which we hope you’ll check out. What follows is the less widely explained side of the blockchain-AI equation: a quick and dirty explanation of what AI is, how it works, and why it’s inherently well suited to work in concert with a decentralized platform like a blockchain. After that, we’ll go into why this merger, if it happens, could be such a major revolution, and survey some of the most exciting research happening right now.

Once you have a working understanding of the dynamics of the space, you will be in a decent position to evaluate new claims about it, knowing just where the pain points lie. That way, you can actually do something to fix them -- and reap the benefits, as a result.

What Is Modern Artificial Intelligence?

In the modern sense, artificial intelligence has very little to do with intelligent computers; rather, it refers to the implementation of a totally new computing paradigm that was first discovered in nature: the neural network. Nature’s neural networks are called “biological neural networks,” or “brains,” while modern AI neural networks are called “artificial neural networks” or “ANNs.” ANNs are pieces of software that try to process information by mimicking the processing strategy of the brain. In common parlance, an artificial neural network that does useful work is generally referred to as an “AI,” though it’s no closer to consciousness than any other type of software program.


The useful work that an AI does can be almost anything that boils down to a series of operations on data, from optimizing delivery routes to finding cats in pictures to quickly identifying the words in spoken language. AI solutions can now sift incoming spatial data in real time to safely direct an autonomous car through traffic, or perform mass-scale facial recognition to find wanted individuals amid footage of bustling city centers. In other words, they can do all sorts of things that previously only the human brain could do.

AI developers have proven time and again that their approach to software development, called machine learning, can solve problems that have confounded traditional computer programmers for decades. It can even solve problems that the programmers who implemented that approach could never have solved themselves. It’s an incredible revolution in software engineering, but what is it that specifically makes machine learning so powerful?

How AI Works

The big difference between the way a digital computer and a neural network compute information lies at a low structural level. Traditional computers combine millions or billions of binary transistors into a single, extremely complex computing unit. All of these transistors must work together according to the march of a highly controlled, centralized beat -- the “clock cycle.” This means that they all execute a single, unified base of programming, and changing the behavior of any one transistor requires editing the same code that governs every other transistor. They all take direction from the same authority, meaning that that authority must know precisely what state to put every single transistor in at every single instant -- if it doesn’t, the whole computer ceases to function.

As a result of the sheer complexity of this meta-code, even a small coding change will cause huge problems in execution unless that change is chosen with extreme precision. This means that if a coder can’t specifically predict the coding change that will cause the effect they desire, then neither the coder nor the computer can achieve that effect. Coders can only tell classical computers to execute solutions -- not to solve problems. In traditional programming, solutions are still the sole responsibility of human employees, while computers simply put those already-discovered solutions into practice.

Neural networks, on the other hand, are systems of extremely simple computing units (“neurons” or “nodes”) that each work independently of one another, using their own simple programming and executing that programming on their own timeline. It’s a bit of a conceptual shift to think of our biological neurons as computers, but in a processing sense that’s what they are -- they take inputs, modify those inputs in some predetermined way, then pass the result on to one of a selection of later neurons for further processing. In this sort of design, the behavior of the network is an emergent result of the individualized programming of each component neuron, and of the relative strengths of the links (“synapses”) between them. There is no central authority like a clock cycle, and thus no single, monolithic place to make changes to the overall behavior of the system.

What this means is that the programming of any one neuron (node) in a neural network is extremely simple. Just like a biological neuron, each simply takes an input in some form, performs a simple operation on that input, and then passes that piece of processed information on to one of several possible neurons next in the sequence. When the programming that controls neurons is this simple and self-contained, it becomes possible to make small, semi-random changes to the programming of individual neurons on the fly without necessarily causing the entire network to cease functioning. With a neural network, it’s possible to make a very specific modification without having that modification affect any portion of the network other than the one you directly modified.
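To make that concrete, here is a minimal sketch of the kind of simple, self-contained unit being described. Everything in it -- the weights, the bias, the sigmoid squashing function, the two-layer wiring -- is a hypothetical illustration, not a real AI:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weight each input, sum the results,
    and squash the total into a 0-to-1 activation (a sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# A tiny two-layer "network": each neuron's output becomes an input
# to the next layer. Tweaking one neuron's weights changes only that
# neuron's behavior -- no other part of the network is touched.
hidden = [
    neuron([0.5, 0.9], weights=[0.4, -0.6], bias=0.1),
    neuron([0.5, 0.9], weights=[0.7, 0.2], bias=-0.3),
]
output = neuron(hidden, weights=[1.2, -0.8], bias=0.0)
print(output)
```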

In machine learning, we generally begin every task with a useless network, a web of interconnected neurons, each with unique semi-random programming and each with semi-random chances to pass inputs along to any of a variety of neurons next in the sequence. AI development is about rapidly testing millions upon millions of tiny changes to such a blank-slate ANN’s programming, and classifying each such small-scale change as either generally helpful or generally not, with respect to some predefined goal. When we take these true/false trial results and use them to modify the network so “true” results happen more often and “false” ones less often, we are performing the process called “machine learning.”

Put differently, machine learning is the act of taking many semi-random trial runs and determining whether each run was good or bad at solving a particular problem. Better runs lead to very slight increases in the probability that the changes associated with that run will be maintained in future versions of the neural network, while worse runs result in a slight decrease in the same. Over enough such trials, perhaps a few million attempts, machine learning acts as a roughly evolutionary process, slowly nudging the network toward an optimal structure for one particular type of operation. Machine learning modifies the programming of individual neurons as well as the rules by which those neurons choose the next neuron to pass information along to.
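As a purely illustrative sketch of that loop, here is a toy “learner” that uses nothing but semi-random mutation and keep-if-better scoring to discover a hidden pattern. Real machine learning systems use far more sophisticated update rules (most commonly gradient descent and backpropagation), but the evolutionary keep-what-helps logic is the same; the toy problem, the mutation size, and the trial count below are all assumptions made for the example:

```python
import random

def predict(weights, x):
    return weights[0] * x + weights[1]

# Toy "dataset": the hidden pattern to discover is y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

def error(weights):
    # How badly does this version of the "network" fit the data?
    return sum((predict(weights, x) - y) ** 2 for x, y in data)

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
for trial in range(100_000):
    # Make a small, semi-random change...
    candidate = [w + random.gauss(0, 0.05) for w in weights]
    # ...and keep it only if it scores better against the data.
    if error(candidate) < error(weights):
        weights = candidate

print(weights)  # drifts toward [2.0, 1.0] over enough trials
```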

By changing both the individual operations and the order in which they happen, machine learning can evolve a neural network with a talent for virtually anything. As mentioned above, this means that if we can apply a machine learning approach to a problem, we can use a computer to actually solve that problem, as opposed to what computers have been able to do up until now: implement solutions. With digital computers, programmers can only tell computers to execute very specific commands, meaning that those programmers need to know which very specific commands to execute. That means they have to have already solved the problem, conceptually, before they can get a computer to implement that solution. A neural network, on the other hand, can be made to learn from trial runs -- in a very real way, from experience -- and develop new solutions that its creators could never have come up with themselves.

There are things that programmers can do, like understand spoken language, without necessarily knowing how they do them, or which sequence of neurons fires in which order to produce that effect. Machine learning can produce those sorts of abilities, making it an incredibly potent approach to software development. However, think this trial-and-error model of machine learning all the way through and you’ll quickly run into a problem: how do we know whether a particular trial is an error? There are three answers, and all three are also the main reasons that AI and blockchain are a match made in heaven: data, data, and more data.


AI Feeds on Data

The most crucial part of a machine learning setup is not the neural network being modified, nor the machine learning algorithm doing the modification -- it’s the dataset used to judge each trial as a success or a failure.

If we want a neural network to get better at, say, identifying cats in photographs, we can’t just give it a bunch of photos of both cats and non-cats to guess at, since it would have no way to know whether any particular guess was correct or incorrect. In other words, the process of machine learning requires access to a dataset with an associated layer of metadata; in our example, a human being would have to go through and define whether or not each photograph does indeed contain a cat. This creates a base-level truth for each picture that allows the neural network to classify a particular trial cat-guess as correct or incorrect, and to weight the associated structural changes accordingly.
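In code, that metadata layer is just a label attached to each example, and a trial is scored by checking guesses against those labels. The filenames and the stand-in classifier below are hypothetical placeholders, not a real cat detector:

```python
import random

# The dataset proper: each input is paired with a human-supplied
# "truth" label. Without the labels, no guess could ever be scored.
labeled_photos = [
    ("photo_001.jpg", True),   # a human confirmed: contains a cat
    ("photo_002.jpg", False),  # a human confirmed: no cat
    ("photo_003.jpg", True),
    ("photo_004.jpg", False),
]

def classify(photo):
    """Stand-in for a trial version of the network's cat-guess."""
    return random.choice([True, False])

# Score one trial run: each guess is marked correct or incorrect
# against the human-defined truth. This score is what decides which
# structural changes get kept and which get discarded.
correct = sum(classify(photo) == label for photo, label in labeled_photos)
print(f"{correct} of {len(labeled_photos)} guesses correct")
```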

AI Truths and the Dataset

What this means is that AI development is totally dependent on both the size of a dataset (the number of trials we can run) and the quality of that dataset’s truth-defining metadata. AI development works by finding hidden patterns in existing actions -- in this case, the human action of finding cats in pictures -- so that it can repeat those patterns (and actions) later. When humans show a neural network how to find cats in pictures by doing it themselves (several million times), machine learning can slowly edit that neural network’s structure so that it, too, is good at finding cats in pictures. An AI with access to more and better-quality data will learn more quickly and effectively.

And the blockchain is all about access to data.

There are three main ways that AI and blockchain could be integrated. The first two are already underway: using blockchain to furnish AI with data, and using AI to furnish the blockchain with insight. The third is by far the furthest out, but it’s also the one that could realize both the utopian and apocalyptic dreams of science fiction: blockchain as AI.


Author: Graham Templeton

If you’d like to stay connected with us, be sure to follow us on Facebook, Twitter, and LinkedIn for the latest blockchain guides and updates from Vanbex.