Dairy giant Land O’Lakes recently made history.
It hired a Silicon Valley startup to deliver 20 tons of butter across America by truck.
The 2,800-mile drive from California to Philadelphia took over 40 hours.
And here’s the kicker… the 18-wheeler drove itself the whole way!
A safety driver rode along, but he didn’t have to put his hands on the wheel once.
This was the first time a self-driving semi delivered goods for a paying customer.
The camera crew traveled to a warehouse in the heart of Arizona’s Sonoran Desert.
Inside sat a fleet of 40 shiny new self-driving 18-wheelers. Those big rigs belong to TuSimple, a robotruck startup valued at $1 billion.
Delivery giant UPS recently hired TuSimple to haul parcels from Phoenix to its delivery hubs across Arizona and Texas. It’s now running over 20 trips per week for UPS.
Skeptical? I don’t blame you. We’ve been hearing about robocars for decades now.
The US military’s research wing DARPA poured half a billion dollars into autonomous vehicles in the 1980s. Dozens of automakers including Mercedes-Benz… Audi… and Ford took a crack at building them too.
But they all failed… for the same reason. They couldn’t figure out how to teach a computer to “see.”
As I mentioned last week, “seeing” is second nature to humans. But it’s been extremely difficult for computers… until now.
A couple of years back, even the world’s most powerful computers couldn’t tell a dog from a muffin. No joke. So what hope was there for self-driving cars to recognize joggers and cyclists like a human can?
But in 2012, researchers rewired the way machines see… and they gained a new superpower known as computer vision. (You can learn how to access my No. 1 computer vision stock here.)
The quantum leap came from training machines on millions of labeled photos of everything from dogs… to churches… to fridges… until they could identify what was in each image.
But here’s the thing… once you teach a machine to see dogs, you can teach it to spot pedestrians and cyclists.
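To make that concrete, here’s a tiny Python sketch of the idea: point an off-the-shelf image classifier (one pre-trained on millions of labeled photos) at a single picture and ask what it sees. It uses the freely available PyTorch/torchvision library; the file name is made up, and it’s purely an illustration, not any carmaker’s actual software.

```python
# Illustrative only: classify one photo with an off-the-shelf pretrained model.
# This is a sketch of the "teach a machine to recognize photos" idea, not any
# self-driving company's code. The file name below is hypothetical.
import torch
from torchvision import models
from PIL import Image

# Load a network that was already trained on millions of labeled photos (ImageNet).
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# Prepare the photo the same way the training photos were prepared.
preprocess = weights.transforms()
image = Image.open("dog_or_muffin.jpg").convert("RGB")  # hypothetical file
batch = preprocess(image).unsqueeze(0)                  # add a batch dimension

# Ask the model what it "sees" and print the most likely label.
with torch.no_grad():
    scores = model(batch).softmax(dim=1)
top_score, top_idx = scores.max(dim=1)
print(weights.meta["categories"][top_idx.item()], f"{top_score.item():.1%}")
```

Swap labels like “golden retriever” and “muffin” for “pedestrian” and “cyclist,” retrain on road imagery, and you have the seed of a self-driving perception system.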
This “big breakthrough” happened in 2012, but it’s only starting to filter into self-driving vehicles.
Think of computer vision like the internet. Both are “platforms” that innovators use to build world-changing disruptions. The thing is… it takes years to figure out how to use these transformational inventions.
The internet was first created in the 1970s. But tech geniuses didn’t build the tools that disrupted the world until the ‘90s.
Self-driving technology has been in that twilight zone for the past decade. Innovators have been hard at work crafting computer vision systems for cars.
And now, we’re getting darn close to being ready for prime time.
Longtime RiskHedge readers know Google’s Waymo is the front-runner in the self-driving race.
It’s been ferrying paid passengers around Phoenix in its minivans for the past year. And often with no safety drivers. Here’s a Waymo dropping NFL All-Star Larry Fitzgerald off at Starbucks:
Its robo-taxi service topped 100,000 trips last year. And now, Lyft users in Phoenix can hail one of Waymo’s self-driving minivans.
In fact, Waymo has now driven 20 million “fully driverless” miles on US roads. These incredible achievements are possible because of its computer vision system.
Its latest robotaxis are fitted with 29 special cameras that give the car’s centralized “brain” a full 360° view.
It can spot stop signs from over 1,600 feet away. It’s learned how to react when traffic lights change color… and what to do “on the fly” when it approaches road work.
Its computer vision cameras can even detect school buses. They then tell the car to watch out for children stepping off the sidewalk.
Remember, not so long ago, machines couldn’t tell a dog from a baked good. By harnessing the power of computer vision, Waymo’s cars have learned what to do when they see debris on the road… or pedestrians on electric scooters.
Amazon is currently in advanced talks to scoop up robocar startup Zoox for $2 billion.
When completed, it will be the largest acquisition in self-driving history!
Not many folks have heard of Zoox. But a leading venture capitalist “insider” told me it’s a dark horse in this race.
Amazon jumped at the chance to buy Zoox for its top-shelf computer vision technology.
Zoox’s cars cruise around the busy streets of Las Vegas almost daily. They drive in and out of bustling hotel pickup areas… nudge around construction zones… and merge into high-speed traffic. And the safety driver doesn’t have to touch the steering wheel once!
Check out how Zoox’s car sees the world here:
Zoox’s high-resolution “eyes” recognize everything around them. They detect cars on the opposite side of the road… see the color of traffic lights… and track pedestrians on the sidewalk.
The real magic is how Zoox’s car “understands” what it sees. For example, when it spots an ambulance with its sirens on, it knows it might have to pull over and let it pass.
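To give you a feel for what that “understanding” step looks like, here’s a toy Python sketch that maps what the cameras report to a driving decision. Every label, field, and rule below is invented for illustration… it is not Zoox’s actual planning code, and real systems are vastly more sophisticated.

```python
# Toy illustration of turning perception output into a driving decision.
# The labels, fields, and rules are invented for this example only.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "ambulance", "pedestrian", "traffic_light"
    distance_m: float   # distance from the vehicle, in meters
    attributes: dict    # extra info, e.g. {"sirens_on": True} or {"color": "red"}

def decide(detections: list[Detection]) -> str:
    """Return a (grossly simplified) driving action based on what the cameras see."""
    for d in detections:
        if d.label == "ambulance" and d.attributes.get("sirens_on"):
            return "pull_over"        # yield to emergency vehicles
        if d.label == "traffic_light" and d.attributes.get("color") == "red":
            return "stop"
        if d.label == "pedestrian" and d.distance_m < 20:
            return "slow_down"
    return "continue"

# Example: an ambulance with its sirens on, 60 meters ahead.
scene = [Detection("ambulance", 60.0, {"sirens_on": True})]
print(decide(scene))                  # prints "pull_over"
```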
Remember, TuSimple’s robotrucks haul 20+ loads of cargo for UPS each week.
Its fleet of big rigs is fitted with nine cameras and a bunch of image-capturing sensors. This vision hardware lets each truck “see” half a mile ahead.
That’s how a robotruck can steer its way along highways… day or night… and safely drop off its cargo.
In short, computer vision is fueling a wave of rapid innovation in the self-driving space.
With Waymo’s robotaxis… and Zoox’s “hands-off” trips through San Francisco and Las Vegas… this disruption is advancing faster than ever.
I don’t say this lightly: Computer vision is the greatest disruptive innovation in a generation. It will be bigger than the PC, internet, and smartphone.
And now’s the time to take advantage, before it’s all over the mainstream media.
You can learn more about this exciting trend, including details on my favorite stock to play it, by going here.
Stephen McBride
Editor — Disruption Investor