
How Does NexOptic’s Artificial Intelligence Work?

Rhys Hanak - January 31, 2018

In just a handful of years, neural networks have become one of the world’s most promising new technologies—even being compared to the invention of electricity in terms of societal impact. But it wasn’t always this way.

Despite their transformational potential, neural networks were shunned by artificial intelligence (AI) academia for decades.

Perhaps it was the fear of failure.

After all, the “AI Winter” of 1974 (a period in which governments significantly reduced their funding for AI-related projects due to waning public interest) had destroyed the careers of many a researcher. The risk of tarnishing an already funding-starved field of study may have been deemed too great.

Or perhaps it was simply a shared disbelief in the concept. Nevertheless, neural network technology limped on, eventually paving the way for innovations in healthcare, fintech, and of course—imaging.

Our imaging artificial intelligence technology consists of convolutional neural networks that learn to enhance low-light performance, image stabilization, and high-dynamic range (HDR) across extreme lighting situations, all in real-time.

We believe that this digital technology—in combination with our Blade Optics™ lens designs—could define the imaging systems of tomorrow.

What Are Neural Networks?

A neural network is a set of interconnected layers of simple processing units that work together to process complex data sets. As outputs pass from layer to layer, the neural network learns to improve the efficacy of its processing according to the “rewards” or “punishments” it receives.

It’s important to note that we anthropomorphize neural networks a bit when we talk about rewards and punishments.

To computers, rewards and punishments are merely man-made labels: in actuality, we are only adding a number (e.g., +1) to or subtracting a number (e.g., -1) from a score in the program. The programmer defines what is good (a behaviour that adds 1) and what is bad (a behaviour that subtracts 1).

Again, computers have no emotions. The reference to rewards and punishments simply provides an easy-to-understand analogy for how neural networks learn—a topic that we will dive into in greater detail below.
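To make the analogy concrete, here is a minimal sketch (the function name and examples are ours, purely hypothetical) showing that a “reward” or “punishment” is nothing more than a number added to or subtracted from a running score:

```python
def score_behaviour(predicted, target):
    """Return +1 (a 'reward') if the output matches the target, else -1 (a 'punishment')."""
    return 1 if predicted == target else -1

total = 0
examples = [("bright", "bright"), ("dark", "bright"), ("bright", "bright")]
for predicted, target in examples:
    total += score_behaviour(predicted, target)  # just integer arithmetic

print(total)  # 1 + (-1) + 1 = 1
```

The “emotion” is entirely on the programmer’s side: the computer only sees the arithmetic.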

Supervised Learning: NexOptic’s AI Technology

Neural networks tend to learn through 3 primary methods: supervised, unsupervised, and reinforcement learning.

NexOptic’s imaging artificial intelligence technology uses supervised learning: we provide our algorithm with explicit examples of what it should strive to create, also known as the “target”.

Let’s take a look at an example of how we use supervised learning.

First, we give our proprietary neural network a dark image (the input) and the corresponding bright image (the target).

Then, we train our neural network technology to transform the image from dark to light. This training is made possible through the previously explained concept of rewards and punishments.

An example of our AI technology transforming a photo from dark to bright. A video of our AI in action can be viewed here.

Data pairs (each consisting of an input and a target) make up the bulk of supervised training.
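The dark-to-bright training idea can be sketched in a few lines. This is an illustration of supervised learning on {input, target} pairs, not NexOptic’s actual network: here the “network” is a single brightness gain, and the dark/bright pixel data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
true_gain = 4.0
dark = rng.uniform(0.0, 0.25, size=100)       # inputs: dark pixel values
bright = np.clip(true_gain * dark, 0.0, 1.0)  # targets: the brightened pixels

gain = 1.0   # initial guess at the dark-to-bright transformation
lr = 5.0     # learning rate
for _ in range(200):
    pred = gain * dark
    error = pred - bright                 # larger error = larger "punishment"
    grad = 2 * np.mean(error * dark)      # gradient of the mean squared error
    gain -= lr * grad                     # adjust to reduce future punishment

print(round(gain, 2))  # converges to 4.0
```

Real networks learn millions of such parameters at once, but the loop is the same: compare output to target, then nudge the parameters to shrink the difference.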

In unsupervised learning, the algorithm is not provided with any explicit examples of what it should create. It receives only inputs and must discover structure in that data on its own, without human-defined “targets”.
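A hypothetical illustration of the difference: given only pixel brightness values (inputs, no targets), a tiny k-means loop can discover the “dark” and “bright” groups by itself:

```python
import numpy as np

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.1, 0.02, 50),   # dark pixels
                         rng.normal(0.8, 0.02, 50)])  # bright pixels

centres = np.array([0.0, 1.0])  # initial cluster centres
for _ in range(10):
    # assign each pixel to its nearest centre
    labels = np.abs(pixels[:, None] - centres[None, :]).argmin(axis=1)
    # move each centre to the mean of its assigned pixels
    centres = np.array([pixels[labels == k].mean() for k in range(2)])

print(centres.round(1))  # roughly [0.1, 0.8]
```

No one told the algorithm which pixels were dark or bright; the grouping emerged from the inputs alone.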

What Is Reinforcement Learning?

The other popular method of training neural networks is through reinforcement learning (RL).

Reinforcement learning doesn’t use any {input, target} pairs.

In RL, the algorithm interacts with its environment and in turn occasionally receives rewards. Its task is to simply maximize the reward it receives.

A classic example of reinforcement learning would be solving a maze.

In this scenario, the algorithm receives a small reward for successfully reaching the end of the maze. The challenge is not only figuring out which choices led it to the exit, but also learning to repeat those correct choices so that it reaches the exit—and the reward—faster in the future.
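The maze example can be sketched with Q-learning, a classic RL algorithm (this is our illustration, not anything NexOptic uses): an agent in a five-cell corridor earns +1 only when it reaches the rightmost cell, and from that sparse reward it learns to always move right:

```python
import random

N = 5                # states 0..4; state 4 is the maze exit
ACTIONS = (-1, +1)   # move left or move right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

random.seed(0)
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):                  # training episodes
    s = 0
    while s != N - 1:
        if random.random() < eps:
            a = random.choice(ACTIONS)                  # explore
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])   # exploit
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0                 # reward only at the exit
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy: the best action in every non-exit state
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)  # [1, 1, 1, 1] -- always move right
```

Note there are no {input, target} pairs anywhere: the agent only ever sees its state and an occasional reward, exactly as described above.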

Deep Neural Network vs. Neural Network

Today, “deep neural network” and “neural network” effectively refer to the same thing.

But back in the day, the first neural networks were rather limited—both in size, and in complexity.

To illustrate, the neural networks of yesteryear usually had 1 input, 1 layer of processing, and 1 output.

Researchers eventually found that they could stack multiple layers of processing to make the neural network perform better. Unfortunately, doing so also made the neural networks harder to train.

Breakthroughs in computation, training algorithms, and the availability of big data made training networks with multiple layers much easier. We went from single layer networks, to several layer networks (deep networks), to several hundred layer networks (still called deep networks) in a matter of a few years.

Whereas old neural networks had a few layers, today’s networks can have hundreds.
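The “stacking layers” idea is easy to show. The sketch below (our illustration) pushes one input through 100 dense layers with ReLU activations; it also hints at why deep stacks were hard to train, since with naively scaled weights the signal shrinks dramatically as it passes through the layers:

```python
import numpy as np

rng = np.random.default_rng(2)

def layer(x, w, b):
    return np.maximum(0.0, x @ w + b)   # linear step + ReLU nonlinearity

x = rng.normal(size=(1, 8))             # one input with 8 features
weights = [(rng.normal(size=(8, 8)) * 0.1, np.zeros(8)) for _ in range(100)]

for w, b in weights:                    # 100 stacked layers = a "deep" network
    x = layer(x, w, b)

# With these small weights the signal has all but vanished -- one reason
# very deep networks needed better training techniques.
print(x.shape, float(np.abs(x).max()))
```

Modern initialization schemes and training algorithms keep the signal alive through hundreds of such layers.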

Enhancing Imaging Experiences Through Artificial Intelligence

Artificial intelligence holds the potential to impact all aspects of human life.

In fact, artificial intelligence is already a core part of some of the world’s most popular products—from Google Maps to Spotify.

We believe that in a similar way, our imaging artificial intelligence technology could become an integral part of popular imaging products—from consumer imaging devices like DoubleTake to surveillance systems, drones, and much more.

Rhys Hanak

When I’m not sharing NexOptic’s story with the world, you can find me in the mountains hiking or out on a run.