CES 2016: From the Fringes

The Consumer Electronics Show in Las Vegas is the premier trade show for tech product previews and release announcements, with debuts stretching from the VCR in 1970 to driverless cars in 2013*. This year CES featured about 3,800 exhibitors spanning 2.47 million sq. ft. across 3 locations**, visited by 170,000 media and industry professionals, and I was privileged to count myself amongst them. With keynote addresses from Intel, Netflix, IBM, Samsung, NVIDIA, Volkswagen and other big names, a lot has been written, presented and shared on mainstream as well as social media about the 4-day event. This chart sums up the hype pretty well:

Source: BuzzRadar, CTA

I decided to share some of my views from the fringes, rather than the trenches — there is no point in rinsing and repeating what is already out there, nor do I have any delusions about the value of my personal opinion about tech that enables your car to count how many oranges are left in your fridge (yes, it was demoed, with voice control).

[Image: Oculus Rift demo]

The Oculus Rift demo was by far the hardest to get into: there was a line, a line to get into the line, and a third holding area. Eventually I made it in on the last day, and it took me about 20 minutes to recover from the simulator sickness caused by piloting EVE Valkyrie’s spacecraft from a living room chair. I still felt there were rough edges; the HTC Vive was by far a more refined, immersive and truly flawless experience. The new Sony PlayStation VR was quite impressive as well: I could lean out of a moving car and look behind me, and the granularity of control was so good I could rotate knobs on the car stereo. OSVR.org-based devices were quite popular too, and some others that caught my eye were the Virtuix Omni active VR platform, AntVR’s Holodeck concept and ICAROS’ EUR 10,000 gym equipment that lets you fly around in a virtual world powered by your own body. Certainly beats playing first-person shooters wearing VR goggles on a treadmill, or riding a virtual horse on an exercycle.

There were tons of clones (mostly based on Gear VR) and drones. Augmented Reality seems to be gaining ground, but despite solutions like the Sony SmartEyeglass and the Daqri Smart Helmet, VR appears to be the more popular of the two. It’s worth noting that virtually every VR or AR demo was running Unity3D content, including those at the NASA and IEEE booths.

I also tried my hand at racing simulators of various scales: from small VR setups, to actual cars mounted on motion platforms, to a massive 4×4 grid of 55″ OLEDs in front of a force-feedback seat rig. There were several interesting display technologies on show: glasses-free 3D, transparent displays (scaling up to entire walls), curved screens, and Samsung’s modular, edge-blending display tech straight out of a sci-fi movie. Avegant’s Glyph might turn the display industry on its head, though, much like the way it’s worn.

[Image: Samsung modular display]

 

On the automotive side, voice-, gesture- and intent-based user interfaces seem to be gaining ground. Also making an appearance were adaptive user interfaces and improvements in sensor fusion, self-learning and self-driving techniques. There were tons of wearables, 3D printing and home automation booths. The two core themes seemed to be a maturing of the ecosystem (just about everything was built on top of something else; not many technologies were solving problems from scratch) and apps for doing things that don’t need apps, like locking your front door. You’d think we would stop there, but no:

On the social innovation side, I found GrandPad, Casio’s 2.5D printing and the Genworth R70i Aging Experience very thoughtful. Besides these, I liked Mixfader’s idea of an MVP slider for mobile DJs: after all, the crossfader is the one thing that requires precise tactile control; everything else can be relegated to the screen. Also impressive was Sony’s line of 409,600 ISO see-completely-in-the-dark cameras. And this is now a thing:

[Image: Sony Life Space UX]

You can probably find plenty of beautiful photos of Las Vegas on the Internet, so let me leave you instead with this video of a not-so-common Las Vegas activity that I squeezed in on the last day, courtesy of DreamRacing.com (very fringe-y, because I picked a Nissan over a Ferrari). Thanks for reading!

* Apple, Google and Microsoft have their own tech events, and despite the Xbox (2001) and Android devices (2010) being unveiled at CES, these companies tend to keep their product announcements exclusive to their own events. So no HoloLens at CES.

** Tech East (Las Vegas Convention Center), Tech West (Sands Expo at The Venetian, The Palazzo, Wynn and Encore) and Tech South (Aria and Vdara)


Why Driverless Cars?

[Image: The Jetsons]

Image Courtesy: Ludie Cochrane, Flickr

In case you’re late to the party, it all started with the DARPA Grand Challenge more than 10 years ago. Then Google got serious about it. Then Audi. And Audi on a racetrack. How could BMW be left behind? Then Tesla brought Autopilot to its cars. And Apple decided to improve the experience of accelerated transportation. Volvo, meanwhile, decided to converge the phone and the car. And others are attempting to use smartphones to augment the automobile’s current capabilities.

Meanwhile, other advancements were happening, too. Inertial navigation from Jemba. Shape-shifting cars from BMW. Augmented Reality in cars. And in Supercars. The convergence of video games and race cars, which started with instrument cluster design in the Nissan GT-R. Analytics in race cars. Infrastructural innovations, outside the car itself. Convoys of robot trucks in war zones. And some others that didn’t do so well.

There is also talk of remotely driven cars from Ford, although clearly the poor level of software quality we have come to terms with on our desktops needs to be addressed before we scale it up to our cars. Maybe open source cars will come to the rescue.

Impact on jobs and economy aside, there are some fundamental problems with this whole “autonomous vehicle” thinking. Allow me to point out the elephant behind the SUV in the room:

  • #1: Cleaner, greener energy is a much more serious problem to solve than eliminating monotony, fatigue and accidents
  • #2: The 21st century problem is not with drivers or cars, it is with the infrastructure
  • #3: Driverless cars still need to be sturdy enough to support human life
    • It makes sense to make aircraft unmanned (whether for war, commercial or humanitarian purposes), because an aircraft without a human being in it can be lighter, cheaper and more efficient (it doesn’t need windows or safety equipment, for example). By contrast, passenger cars, driverless or not, will always need to carry the same level of safety and comfort equipment as they do now (maybe even more). So making a car driverless, in a sense, just makes it more expensive.
    • And lest you be tempted to think that adding more software will make cars cheaper: already, about half of the cost of a modern car is software
  • #4: A chain is only as strong as its weakest link
    • As long as software is written by humans, there can and will be disastrous and potentially fatal mistakes. Not too long ago, the good folks who write the software that puts shuttles in space acknowledged that we are still in the “hunter-gatherer stage” of software development. Although we have advanced in leaps and bounds, there is still a lot we don’t yet fully understand.
    • Airline pilots have been dealing with a dangerous phenomenon known as automation dependence for years. It’s a problem we don’t have on our roads… yet.
  • #5: We don’t really need cars any more
    • At least not in our cities. Even though we’re driving less, there are more of us driving. And more of us driving even bigger cars than ever before. But what are the use cases?
      • Commuting to and from work? At approximately the same time every day? With tens of thousands of other people like you? To approximately the same physical location? Sounds like a classic opportunity for optimization. How about jumping into a more efficient mass transit vehicle with all the others?
      • Public transport not clean or safe enough? It would cost a fraction of the money to improve and optimize our public transport systems, compared to what it would cost to spam the planet with billions of driverless cars
      • Going shopping? Running errands? Consume less, walk (or bicycle) more and support your local grocery shop 🙂
      • Going out after work? You can’t drink and drive, anyway!
    • The only reasons we still need cars are inter-city travel, emergencies, last-mile connectivity and motor sports (because no one wants to see a bunch of algorithms racing each other). We need to fix our infrastructure and our way of life, not our cars.
    • Peak Car. It’s a thing.
  • #6: We are reinventing the wheel. Literally.
    • Given the problem statement that driverless car manufacturers have set out to solve, wouldn’t it make more sense to build cars that can fly themselves, rather than be constrained to the limits of two-dimensional roads? After all, that’s already a solved problem.

Update 2015-05-25: More food for thought: Self-Driving Trucks Are Going to Hit Us Like a Human-Driven Truck

Q: Whatever Happened to Virtual Reality? A: Augmented Reality

Let me kick off these sections on Augmented Reality & Technology Trends with a mention of Christopher Mims’ recent post over at [MIT] TechnologyReview.com:

So what was it, really, that kept us from getting to Virtual Reality?

For one thing, we moved the goal posts – now it’s all about augmented reality, in which the virtual is laid over the real. Now you have a whole new set of problems

It will be worth your time to read his entire analysis of the question. Meanwhile, interesting things have been happening in the Augmented Reality (AR) field. First, it was Pranav Mistry from the MIT Media Lab demonstrating a cheap, adaptable and wearable gestural interface, aptly called SixthSense. Watch the TED video from November 2009 here.

Then in 2010, marketers started experimenting with adding AR layers onto the real world, making use of software such as the Layar platform for mobile phones. You can watch a convincing demo here. I say convincing because mobile phones are the most likely candidate for widespread adoption of AR technologies; if they were not, we would see more devices like the Wrap 920AR from Vuzix, which are basically wraparound goggles that provide an immersive AR experience.

The video game industry is always quick to adopt new interfaces (and often invents them), and AR is no exception. Motus, from the University of Abertay Dundee, uses a Sixense TrueMotion controller (I couldn’t find any relation to SixthSense) to manipulate a virtual camera in a virtual environment, with applications in gaming, animation and simulation. It was originally inspired by the Simul-Cam that finally enabled James Cameron’s 20-year-old dream to come to life in the form of the 2009 movie Avatar. Ironic: a movie about remotely controlled humanoids, filmed using AR cameras.
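To make the virtual camera idea a bit more concrete, here is a minimal sketch in Python. It is purely illustrative and built on assumptions: the Pose and VirtualCamera classes and the motion_scale parameter are invented for this example and have nothing to do with the actual Motus or Sixense APIs. The gist is that each frame the tracked controller’s pose is read, its translation is scaled up, and the result is applied to a camera in the virtual scene.

```python
# A minimal, hypothetical sketch of "virtual camera" control: a tracked
# hand-held controller drives a camera inside a virtual set, the basic
# idea behind Simul-Cam-style virtual cinematography. All class and
# parameter names here are illustrative assumptions, not the Motus API.

class Pose:
    """Position (metres) and orientation (degrees) of a tracked object."""
    def __init__(self, x=0.0, y=0.0, z=0.0, yaw=0.0, pitch=0.0, roll=0.0):
        self.x, self.y, self.z = x, y, z
        self.yaw, self.pitch, self.roll = yaw, pitch, roll


class VirtualCamera:
    def __init__(self, motion_scale=10.0):
        # motion_scale lets one metre of hand movement sweep the camera
        # across ten metres of virtual set.
        self.pose = Pose()
        self.motion_scale = motion_scale

    def update_from_controller(self, controller: Pose):
        # Scale the translation, pass the orientation through unchanged.
        self.pose.x = controller.x * self.motion_scale
        self.pose.y = controller.y * self.motion_scale
        self.pose.z = controller.z * self.motion_scale
        self.pose.yaw, self.pose.pitch, self.pose.roll = (
            controller.yaw, controller.pitch, controller.roll)


if __name__ == "__main__":
    cam = VirtualCamera(motion_scale=10.0)
    # Fake a few frames of controller input: a slow pan to the right.
    for frame in range(5):
        tracked = Pose(x=0.02 * frame, yaw=3.0 * frame)
        cam.update_from_controller(tracked)
        print(f"frame {frame}: camera x = {cam.pose.x:.2f} m, "
              f"yaw = {cam.pose.yaw:.1f} deg")
```

In a real engine the renderer would then draw the scene from cam.pose every frame; the motion scale is the one design choice of note, since it is what lets an operator walk around a small stage and “film” something the size of a planet.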

Speaking of Avatar, not only was the technology behind it futuristic, but so was the marketing, which used an i-TAG system that let you manipulate an on-screen 3D model via a webcam that tracked your interactions with a tag in the physical world. The same idea of controlling remote representative agents is explored in a different form in the 2009 movie Surrogates, which IMHO was more “epic” than Avatar.

So the next question is: when will we reach the point where we can no longer tell the difference between what is “augmented” and what is “real”? Or would we rather not be able to?