Introduction

Imagine wearing lenses the size of your eyeballs that let you magnify and focus on anything in front of you, or even on something miles away. In a single word: “future.”

This technology would have applications far beyond anything we have thought of yet. Gamers would be excited to get a 10x scope built into their eyes. But there is more to it than that.

Compressed sensing, a signal-processing technique that reconstructs detailed signals from surprisingly few measurements, could let such lenses capture information beyond the traditional human experience. A further fascinating possibility would be to design probes that can detect and record subtle details in light patterns. But how could one build this eye-lens software in the first place? And how could it give you “super-human” vision?
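
For a feel of what compressed sensing actually does, here is a minimal sketch assuming a toy setup: a sparse signal is recovered from far fewer random measurements than it has samples, using plain iterative soft-thresholding (ISTA). The dimensions, the measurement matrix, and the solver parameters are all illustrative choices, not anything a real lens would use.

```python
# Toy compressed-sensing demo: recover a sparse signal x from m < n random
# measurements y = A @ x via iterative soft-thresholding (ISTA).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                      # signal length, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x_true                            # only m measurements of a length-n signal

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from the spectral norm of A
lam = 0.01                                # sparsity-promoting penalty
for _ in range(500):
    x = x - step * A.T @ (A @ x - y)                       # gradient step on ||Ax - y||^2
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # soft-thresholding

print(f"recovered a {n}-sample signal from {m} measurements; "
      f"relative error = {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```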

We’ve all heard of color blindness and how it impedes color vision, but how many of us have heard of tetrachromacy? It is a condition that also affects color vision, except it enables you to see around 100 million shades of color. The average person with standard color vision sees approximately 1 million different hues, while somebody who is color blind may see as few as 10,000 (a simple model of why is sketched below).
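
Those figures line up with a common back-of-the-envelope model: if each functioning cone type distinguishes roughly 100 intensity levels, the number of distinguishable colors is about 100 raised to the number of cone types. The 100-level figure is an illustrative assumption, not a measured constant.

```python
# Back-of-the-envelope color counts behind the figures above, assuming each
# cone type distinguishes roughly 100 levels (an illustrative assumption).
LEVELS_PER_CONE = 100
for cones, who in [(2, "color blind (two cone types)"),
                   (3, "typical trichromat"),
                   (4, "tetrachromat")]:
    print(f"{who}: ~{LEVELS_PER_CONE ** cones:,} distinguishable colors")
```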

As the tetrachromat artist Concetta Antico puts it, “I see colors in other colors… other people might just see white light, but I see orange and yellow and pink and green… so white is not white; white is all varieties of white”.

That was about colors.

What about how far, and how sharply, we can see?

The average person has a visual acuity of 20/20. A score of 20/5 means you can make out at 20 feet what most people can only see from 5 feet away. That level of acuity is akin to an eagle’s vision.

There have been reports of an Aboriginal man with 20/5 vision, though researchers generally believe this level of acuity is not naturally possible in humans.

Bald eagles can have 20/4 or 20/5 vision, meaning they can resolve detail four to five times farther away than the average person can (20/5 is a 4x advantage; 20/4 is 5x).
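
Since Snellen fractions can be confusing, a tiny hypothetical helper makes the arithmetic explicit: the sharpness advantage is just the ratio of the two distances.

```python
# 20/X means you resolve at 20 feet what a typical eye resolves at X feet,
# i.e. a (20/X)-fold sharpness advantage over 20/20 vision.
def sharpness_multiple(snellen: str) -> float:
    test_distance, normal_distance = map(float, snellen.split("/"))
    return test_distance / normal_distance

for s in ["20/20", "20/5", "20/4", "20/1"]:
    print(f"{s}: {sharpness_multiple(s):g}x the detail of 20/20 vision")
```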

Researchers speculate that this comes down to their hunting style. Eagles are adapted to hunting from a stationary position, so their eyes need less movement and can be optimized for fine detail instead.

Nature has, on the other hand, gifted humans with intelligence and a tendency to hack limits. So, how about a 20/1 shot?

Some of you might think this is way off the mark: difficult to imagine, an impossible dream, pure science fiction. But nearly every new technology, even to the scientists building it, once looked like an impossible dream.

Now that we know what we are talking about, let’s get straight to the point.

Eye-Lens Software + Hardware

The eye lenses would be just like a smartphone camera with 100x zoom. And the software, the most important part of the whole process, would control the lenses, adjust the image, and snap a photo straight from your eyes. Yes: enhanced color perception, zoom, and photo capture, all at once.

Once the software is done (a long process we have yet to discuss), it would be connected to a processing unit that processes each image and isolates the parts that are relevant to you. The software would then render this information into a visual experience, just like watching TV or reading a book on your phone.
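
To make that capture-process-render loop concrete, here is a minimal sketch in which every function is a hypothetical stand-in: capture_frame for the lens sensor, find_relevant_regions for the processing unit’s relevance step, and render_overlay for the final visual experience. None of these correspond to a real device API.

```python
import numpy as np

def capture_frame(height=480, width=640):
    """Stand-in for reading a grayscale frame from the lens sensor."""
    return np.random.randint(0, 256, (height, width), dtype=np.uint8)

def find_relevant_regions(frame, threshold=200):
    """Isolate the parts of the image 'relevant to you'.
    Here: a toy rule that keeps only unusually bright pixels."""
    return frame >= threshold

def render_overlay(frame, mask):
    """Render the isolated regions back into a visual experience:
    dim everything except the relevant regions."""
    out = (frame * 0.3).astype(np.uint8)
    out[mask] = frame[mask]
    return out

frame = capture_frame()
mask = find_relevant_regions(frame)
view = render_overlay(frame, mask)
print(f"kept {mask.mean():.1%} of pixels as 'relevant'")
```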

When you look at something, light (electromagnetic waves) enters your eyes, and your retina converts those photons into electrical signals for the brain. The brain interprets these signals as what you are seeing, either forming an image or registering nothing at all.

The Hardware Part

As mentioned earlier, we already have smartphone cameras with 100x zoom, so the ability to zoom in is not a concern.

As for enhanced color perception, you would need to trick your brain into seeing color in greater depth. In the tetrachromacy example above, seeing “white #2837834” instead of plain white is the brain’s doing; here, the hardware would have to do that work instead.

The hardware would build a 3D image for you and manipulate the light waves reaching your eyes so that a single patch of white light resolves into distinct colors and shapes. Nature won’t make your brain do this on its own, just as you cannot watch a 4K video at true 4K on a 1080p monitor.
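
As a toy illustration of “white is all varieties of white,” the sketch below takes near-white pixel values that a normal display would render almost identically and exaggerates the tiny differences between their channels into visibly distinct hues. This is ordinary false-color mapping, not a model of how such hardware would really work; all values are made up.

```python
import colorsys

near_whites = [(250, 252, 249), (252, 250, 251), (251, 251, 248)]

for r, g, b in near_whites:
    # pick a hue from whichever channel dominates, and a saturation
    # from the (tiny) spread between channels
    spread = max(r, g, b) - min(r, g, b)
    hue = {0: 0.0, 1: 0.33, 2: 0.66}[(r, g, b).index(max(r, g, b))]
    rr, gg, bb = colorsys.hsv_to_rgb(hue, min(1.0, spread / 8), 1.0)
    print(f"{(r, g, b)} -> exaggerated "
          f"{(round(rr * 255), round(gg * 255), round(bb * 255))}")
```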

The “tricking your brain” part may be the biggest challenge of this technology. Right now, we have no natural way to do it. Fortunately, there are workarounds.

AI-Powered Hardware

The hardware would enhance your vision while its AI makes sure not to waste your time on irrelevant patterns in a scene.

It would show you only relevant patterns, such as people, text, or food, and skip heat maps, stray imagery, or anything else irrelevant. No fake news here, and no ads either: you would only be presented with information that matters to you.
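
A hedged sketch of that filtering rule, with detections faked as (label, confidence, bounding box) tuples; in a real system these would come from an object-detection model rather than a hard-coded list.

```python
# Whitelist filter: only categories deemed relevant are ever rendered.
RELEVANT = {"person", "text", "food"}

detections = [
    ("person", 0.94, (40, 60, 120, 220)),
    ("ad_banner", 0.88, (0, 0, 640, 90)),
    ("food", 0.71, (300, 310, 90, 80)),
    ("heat_map", 0.65, (0, 0, 640, 480)),
]

visible = [d for d in detections if d[0] in RELEVANT and d[1] >= 0.5]
for label, conf, box in visible:
    print(f"show {label} ({conf:.0%}) at {box}")
```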

But how would you know if something is relevant? How could hardware process your images and make a visual interpretation?

As we said before, your brain processes light patterns coming from the objects around you, and the hardware has no direct way of knowing which parts of those patterns are important for you to see. It would have to infer relevance, and everything it judged irrelevant would be dismissed and not shown at all. In other words, your filtered view of the world could differ from the unfiltered one, a bit like how the imagery a sleeping person “sees” behind closed eyes differs from what is actually in front of them.

“Super-Human” Vision

The software, as mentioned earlier, will play the most crucial role in this whole process. It would control the lenses, manipulate the colors, and snap images from your eyes, giving you that near-100x zoom.

The software would only delete irrelevant patterns and enhance the ones relevant to you; it would not repaint objects as something else or draw new ones from scratch. That would be far too hard for a mere software program, and at the current pace of development it may take a long time before such an AI-powered program exists.

The software would also absorb information about the things around you. For every pattern it identifies, it would create a corresponding “snapshot” of your visual field. It would then assemble these snapshots into a 3D scene through computer algorithms, identifying new patterns all the while. We know this as real-time pattern recognition (RTR).
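
A loose sketch of that loop, with a stand-in recognizer (a real one would be a vision model) and a deliberately trivial “scene” that just accumulates timestamped snapshots per pattern. RTR here is the article’s own term, not a standard library.

```python
import time

def recognize_patterns(frame_id):
    """Stand-in recognizer: pretend every even frame contains 'text'
    and every third frame contains a 'person'."""
    found = []
    if frame_id % 2 == 0:
        found.append("text")
    if frame_id % 3 == 0:
        found.append("person")
    return found

scene = {}  # pattern -> list of snapshot timestamps

for frame_id in range(10):  # stands in for the real-time camera loop
    for pattern in recognize_patterns(frame_id):
        scene.setdefault(pattern, []).append(time.time())

for pattern, snaps in scene.items():
    print(f"{pattern}: {len(snaps)} snapshots accumulated")
```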

After this, the software would have an estimate of what the user’s eyes are seeing in an image, much as the brain itself does. The software would act more like a flash drive than an external hard drive for our actual brains, because it would not be connected to our bodies by anything except the eye lenses themselves.

Mojo Vision’s AR contact lenses were heavily hyped in early 2020, but they never came to fruition.

Even if not in the near future, you will eventually be able to have a digital copy of your eyes. Within a couple of decades, you could be seeing everything in 3D and in colors you have never seen before, as discussed above. It will be a whole new era of visual perception for all of us.
