
Machine Vision…and Squirrels?

Mechanically, the human visual system is not unique. We surpassed our own abilities long ago with inventions that catapulted us into areas of the light spectrum that most living creatures can’t reach. The photo lens, essentially a glass eye, is pretty well perfected.

But human perception is another matter. The 15 centimeters that lie behind the eye – the brain – is a massive net of neurons that processes focused light into image, and image into meaning. That last phase, the part where blobs become quantifiable bits ripe for the flipping, has yet to be simulated. At least not to the level we expect, right?

You’d be surprised.

Wait, so machines can “see”?

No. To “see” is a huge set of tasks revolving around the ability to know, to understand, to learn, and to decide. To “see” is a precursor to reaction and one of many senses that allow us to monitor and audit our behavior. What machines have, currently, is vision – a consequence of sight plus interpretation. Identifying objects in different lighting conditions, from different angles, with different permutations of size, color, and shape is the task of vision.

To illustrate, consider the “Eureqa” machine from Cornell University. After a few hours staring at the movement of a double pendulum – a motion that is extremely chaotic and difficult to predict – the Eureqa machine managed to conclude: F = ma, better known as Newton’s second law of motion. It took less than a day to make an inference that has had massive impact on our understanding of the world around us.
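Inferring a physical law from raw measurements sounds like magic, but the core idea can be sketched in a few lines. This is a toy illustration, not the actual Eureqa algorithm, and every number in it is invented: given noisy (acceleration, force) readings for a 2.0 kg mass, a least-squares fit recovers the proportionality constant in F = ma.

```python
# Made-up measurements of a 2.0 kg mass: force is roughly 2.0 * a, plus noise.
accelerations = [1.0, 2.0, 3.0, 4.0, 5.0]
forces        = [2.1, 3.9, 6.2, 7.8, 10.1]

# Least-squares slope through the origin: m = sum(a*F) / sum(a*a)
m = sum(a * f for a, f in zip(accelerations, forces)) / \
    sum(a * a for a in accelerations)

print(f"estimated mass: {m:.2f} kg")  # close to the true 2.0 kg
```

Eureqa goes much further – it searches over the *form* of the equation as well as its constants – but the spirit is the same: the data, not a human, decides what the law looks like.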

What happened next was a little less surprising: nothing. The machine went right on whirring and clicking, unaware of itself or the meaning of its conclusion. Later on, the machine would produce equations to explain phenomena that scientists couldn’t even understand. The machine, like a ditch digger, had done its job and unearthed a new artifact. It was done.

So, machine vision is like AI?

I’m going to use the word synergy here: Synergy.

Many consider machine vision to be a subset of the study of artificial intelligence, but truthfully the two fields could exist independently of one another. Their areas of interest overlap significantly, though, so the two are often found synergistically feeding into each other as part of a single application.

Take my new favorite app for example: Blippar. Blippar is just a machine vision demo at first blush, but the more curious user might stumble upon some very interesting features. One of these is that it recognizes and identifies plants quite well. With no more interaction than to point your phone at a potted plant, Blippar spots, names, and even describes the plant to you (through a set of suggested links and factoids) all in real time. Besides being scary accurate, Blippar is a great case-in-point study of how machine vision and AI work together.
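Under the hood, recognition boils down to turning an image into numbers and finding the labelled example those numbers most resemble. The sketch below is a drastic simplification with invented data – each “photo” is just a fake 3-bin colour histogram, and the label comes from the nearest labelled example. A real app like Blippar uses deep neural networks, not this toy:

```python
import math

# Hypothetical colour histograms (red, green, blue fractions) for known plants.
labelled = {
    "fern":      (0.1, 0.8, 0.1),   # mostly green
    "sunflower": (0.7, 0.2, 0.1),   # mostly yellow/red
    "orchid":    (0.4, 0.2, 0.4),   # pink-ish
}

def classify(histogram):
    """Return the label whose histogram is closest (Euclidean distance)."""
    return min(labelled, key=lambda name: math.dist(labelled[name], histogram))

print(classify((0.15, 0.75, 0.10)))  # a green-heavy photo -> "fern"
```

The AI half of the partnership is everything that happens after the match: choosing which facts and links about “fern” are worth showing you.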

But, how does that help me?

Blippar won’t deduce the laws of motion for you, not that you’d have much use for that, but it does give you some exciting ideas. These fresh areas of study are rife with opportunities for application way outside the tech industry. Airbnb’s creative team just released a blog post on how they used machine learning (another subset of AI) and machine vision to generate code from sketches. Literally, they sketched out wireframes for a web template, showed it to their machine, and it coded it for them in real time. Watch the video, it’s pretty cool.

Far and away my favorite example of machine vision improving life on Earth is that of Kurt Grandis. He’s the machine learning expert who created a water cannon turret to blast away squirrels in his backyard. Check it out (demo video at around 16:00).

You can even play with a neural network – the simulated brain of a machine with vision – in your own browser. Like, right now: https://goo.gl/xJXCmB
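If the browser demo whets your appetite, the basic unit of those networks – a single artificial neuron – fits in a few lines of Python. Here a lone perceptron (a toy example, with the classic perceptron learning rule) teaches itself the OR function from four labelled examples:

```python
# Training data for OR: ((input pair), expected output)
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(10):                      # a few passes over the data
    for x, target in samples:
        error = target - predict(x)      # -1, 0, or +1
        w[0] += error * x[0]             # classic perceptron update
        w[1] += error * x[1]
        b += error

print([predict(x) for x, _ in samples])  # [0, 1, 1, 1] -- OR learned
```

The browser playground does exactly this, just with many neurons stacked in layers – which is what lets it carve up far messier data than a single neuron can.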

For more on AI and all its applications (and dangers), our expert AI speakers have you covered.
