8 Gadgets from the Near Future That I NEED Now!

Once again, we were tricked into believing that the future we were promised was already here. This time Lexus presented us with a “Hover-board” video (more a clever piece of advertising than anything else). So, in response, I have made a list of 8 things I wish were actually out right now.

  1. The freaking Hover-board. Come on guys, enough with the teasers already; let’s all get to work and come up with the real deal.
  2. Jet packs. Although these are much closer to being a reality, they still are not there yet. I’m very jealous of those two guys.
  3. Self-driving cars. Once again, close, but no cigar. I’m not sure how the alcohol industry hasn’t pushed harder for this one! Google, BMW, and a few other brands are very close, but I think we still have another 10 years before this is an actual reality.
  4. VR games. I’m getting very impatient here, people. We have the Oculus Rift and the Xbox Kinect. How come no awesome VR games are out there for these two things?
  5. Robots. I want my own personal Bender. Well, maybe not a drunk robot, but one that will clean and cook for me.
  6. Teleportation. Only that one guy who turned into a fly cracked it. Unfortunately, he turned into a fly. So it seems like long airplane rides will continue to rule for the next 50 years.
  7. Underwater breathing. I’ve seen some buzz about products that require no tanks to breathe underwater, but nothing solid yet.
  8. Nanobots. This one may just be the solution to the previous seven points. The only thing we will have to do is buy a billion of them, upload our memories and consciousness, and BAM! Shape yourself into whatever you want.

DeepMind and Machine learning

Google has been on the cutting edge of A.I. development for some time and recently made a major advancement. Google’s DeepMind research center, a London-based team of machine learning researchers, computational neuroscientists, and software engineers, recently crafted a program that learned how to play Atari games without being fed any human knowledge.

Machine learning has been groundbreaking since its development: the ability to teach technology how to operate and learn. Educating machines has been akin to raising an infant; we have “raised” them to adopt our language and to learn from patterned experiences. Feeding information into a machine over a long period of time enables it to pick up on patterns and eventually execute on its own. But the “on its own” part has, up until recently, only been possible with human help: programming the machine with curated data so that it can build relevant algorithms. Google’s DeepMind lab, however, has crafted a machine learning program that does not require excessive human-supplied input to learn effectively.
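The “learning from patterned experiences” idea can be made concrete with a toy sketch (purely illustrative, not any production system): a single perceptron that is repeatedly shown labeled examples of the logical AND function until it picks up the pattern on its own.

```python
import random

# Toy supervised learning: a single perceptron "raised" on labeled
# examples of the logical AND function. Repeated exposure to the same
# data lets it pick up the pattern -- a minimal stand-in for the
# human-supplied training data described above.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
lr = 0.1  # learning rate

def predict(x):
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# "Inputting information over a long period of time": many passes
# over the same labeled examples, nudging the weights after each error.
for _ in range(100):
    for x, label in examples:
        error = label - predict(x)
        weights[0] += lr * error * x[0]
        weights[1] += lr * error * x[1]
        bias += lr * error

print([predict(x) for x, _ in examples])  # matches the labels: [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron learning rule is guaranteed to converge here; the point is only that the machine ends up executing the pattern without being told the rule explicitly.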

How is this possible?

Combining machine learning and systems neuroscience to produce general-purpose algorithms, DeepMind’s A.I. program, coined “Deep Q-Network” (DQN), recently taught itself how to play arcade games. Not only did DQN teach itself, the program was also able to establish farsighted strategies. It was reported that the A.I. agent was capable of functioning right out of the box, given only “raw screen pixels, the set of actions available, and game score.”

DQN works from a combination of a deep neural network and reinforcement learning, balancing exploration of the unknown against exploitation of what is already known. Reinforcement learning, inspired by behaviorist psychology, is unlike most supervised learning in that it does not correct weak actions or define the right input/output pairs. Rather, it allows software and machines to find the best context-specific behavior automatically, so as to maximize performance. The network is trained in a way similar to how our brain experiences dreams and flashbacks: from stored samples of information taken directly from the learning phase. DQN is an attempt to create a supercomputer, a program that replicates the human brain.
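The two ingredients named above can be sketched in miniature. The following is a hypothetical toy, not Google’s actual code: tabular Q-learning on a five-state corridor, with an experience-replay buffer supplying the “stored samples” that the agent re-trains on (the dreams-and-flashbacks analogy). A lookup table stands in for the deep neural network.

```python
import random
from collections import deque

N_STATES = 5        # corridor of states 0..4; the reward waits at state 4
ACTIONS = (-1, 1)   # move left or right
GAMMA, ALPHA, EPSILON = 0.9, 0.5, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
replay = deque(maxlen=200)  # stored samples of past experience
random.seed(1)

def step(state, action):
    """Deterministic toy environment: move, clamp to the corridor, reward at the end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    # Break ties randomly so the untrained agent still explores evenly.
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(200):
    state = 0
    for _ in range(50):  # cap episode length
        # Exploration vs. exploitation: mostly act on what is known,
        # occasionally probe the unknown.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        replay.append((state, action, reward, nxt, done))
        state = nxt
        # Replay phase: learn from a random minibatch of stored memories,
        # not just the most recent transition.
        for s, a, r, s2, d in random.sample(replay, min(8, len(replay))):
            target = r if d else r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        if done:
            break

# After training, the greedy policy from every non-terminal state is "move right".
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The replay buffer is the key design choice: sampling old transitions at random breaks the correlation between consecutive experiences, which is what lets the real DQN train a deep network stably.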

Future of Machine Learning

DeepMind was founded partly out of concern over the future dangers of A.I. It stresses the importance of human nuances, which are believed to be too complex for technology to ever understand entirely. In the meantime, however, the evolution of machine learning serves as an excellent asset to numerous fields.

A.I. has proven beneficial for certain diagnostics, especially in the medical world, where opinions and outcomes drawn from a large pool of continuously updated, worldwide data have been a great tool for providing patients with the most relevant, up-to-date information. It generates global statistics from analyzed data that can then be referenced to support a patient’s diagnosis. Perhaps machine learning will eventually replace the scientific method.

What lies ahead for machine learning is still unknown. Perhaps machines will end up replacing the researcher: learning as they go and producing the most relevant information across an endless number of fields (stocks, economics, academia, health, etc.), providing not only data, but data along with solutions. However, the emotional decision making and illogical tendencies of human choices are still too abstract to be captured in a program. The more advanced machine learning that researchers are looking to develop would require complex algorithms and a memory span that is, at the moment, unimaginable.

A Future Where Reality Is Not So Real

Lately, or not so lately, there has been a great amount of buzz around Virtual Reality. However, I would rather not talk about the tech itself (I’m sure there are plenty of articles out there about that) but instead discuss the philosophical questions this form of tech may raise in decades to come.

In this post, I would like to introduce another buzzword that has been in the tech world for quite some time now. What word, you may wonder? If you guessed Singularity, you were right. For those seeing this term for the first time, allow me to give you some quick background.


For a long time, this term was used mainly by mathematicians and physicists. More recently, however, Singularity made its way into the tech industry. It is often described as a point in time when technology will exceed human intellectual capacity and control, thus radically changing civilization. Put more simply, the Singularity is the point in the future where human life as we know it will be changed beyond possible imagination.

If you have heard of this before, then you know that the Singularity mostly revolves around A.I. (Artificial Intelligence). There are many predictions regarding when machines will take control over humanity. In fact, there is a great documentary from the ’80s on this subject. And to clarify, by documentary I mean movie, and by movie I mean Terminator. But let’s not get too distracted here because, if I remember correctly, this post is about V.R., not A.I.

As someone who works at Ruckbau, a company that specializes in machine learning and pathfinding software, I’m pretty confident in saying, although sorry to disappoint, that machines will not be taking over the world anytime soon. At least not as the “first Singularity”. I see better odds of V.R. being the first tipping point in society. Oh, and what is the name of that other late-’90s documentary? You know, the one with Mr. Anderson? Oh yeah! The Matrix. Possibly the worst human nightmare as far as technology goes: V.R. and A.I. combined. But once again, let’s not talk about A.I. and just focus on V.R.

A future where reality is not so real:

We are approaching the foreboding era of being able to implant chips directly into our brains: something close to giving your brain the right stimulus to trick you into feeling and seeing things that do not physically exist. That being said, I don’t want to get into what may or may not be possible, but rather discuss what is already a “reality”.

The Oculus Rift headset

Last year I purchased an Oculus Rift headset (second generation) and got to experience how V.R. really works. Even though the tech is “not there yet”, it is very obvious that it will become more and more refined in the years to come, eventually allowing a fully immersive experience. This doesn’t mean you will be able to touch and feel things; however, you will be able to stand and walk around a room that can be transformed into infinite possibilities.

Here is the part where the philosophical aspect of the post comes in. In order to describe it best, I will refer to a very simple thought experiment I like to call “The Ocean Experience”.

To start the thought experiment, we first need to set up a “real” environment. So here we go:

It is a beautiful Saturday afternoon during the summertime. You are provided with a backyard, a beach chair (think the kind that fully reclines and even lets you sleep on it if you want), of course a bucket full of your favorite beer, and your V.R. headset. You are also provided with NO time or money to travel to those amazing Caribbean beaches that you want to explore so badly.

Now, let’s turn your real setup into a “not ‘real’ but very real experience”. You place your bucket full of beer next to your beach chair, and now all you have to do is sit, recline, and put your V.R. headset on. A few seconds later you are transported to a beautiful, very relaxing Caribbean beach.

Are you there? Well, the first answer that comes to mind is no. You are in the middle of your backyard. In spite of your first instincts, I would like you to put more thought into the question and reconsider. Think about this: you are now a few beers deep, and you can feel the heat of that summer sun on your skin. You look around and all you see is ocean, sand, some people nearby, and maybe the occasional seagull. So I ask you again: are you there? Once more, I’m sure you want to answer no. But isn’t reality, at least to some extent (here is where the real debate starts), nothing more than what your brain is able to perceive and process? So how can you tell that, until the moment nature calls and you need to pull the V.R. headset off and run to the bathroom, you weren’t “there”?

Chip Implant…Reality?

What’s going to happen when (or if) that chip implant becomes a “reality”? Then you could create the right stimulus to trick your body into feeling the sand between your toes. You could tell your brain that the smell of grass is actually the smell of the ocean.

About a year ago, when I bought my V.R. set, I had an argument with my dad about reality. He kept saying that no matter how “real” it feels, it’s not “reality”. This is where I disagree.

So I would like to ask you this question: If you feel that sand between your toes, you smell the ocean  breeze, and you see the ocean -does it matter where you physically are?


Forget about machines taking over humanity. “Smart machines” created by humans, at least at this point in time, are actually not so “smart”, and it will take another 40 or 50 years before they are somewhat “smart”. However, I do see a future where you’ll be choosing your own “reality”.

All I can do at this point is just sit here and ponder about the infinite possibilities…

Fede Pisani

Technical Blogger