A16z posted a very interesting slide presentation on "The End of Cloud Computing." Don't worry, software is still eating the world, but how data is processed will most likely start to change. The title is a little bit clickbaity in that it should really be "The End of Cloud Computing (as we know it, particularly as it relates to IoT)."
The presentation posits that processing, or perhaps more aptly 'decisions', will happen not in the cloud but at the point where data is created and captured (effectively distributed data centres, or data centres on wheels). We are going back to the future in terms of where processing gets done.
Welcome to the world of distributed "edge intelligence." For the first time, we are collecting real-world data about our environment, and in massive quantities. Sophisticated end-point devices (e.g. your car, or any IoT device) require distributed computing systems at the 'edge of the network' and can no longer rely on centralised data centres. But why?
Processing at the edge
Well, firstly, what do we mean by 'edge of the network'? The clearest example is Tesla and self-driving cars ("end points"). LiDAR and the other on-board sensor systems generate and capture gigabytes of data per mile, which even over a fixed line would take eons to upload. So if you can't upload the data and you still need to make decisions, the data has to be processed where it is collected. Throw in time dependency and spotty coverage, and real time becomes critical too: if you have to wait for a round trip to the cloud, there can be meaningful issues, like dying by crashing into a moose. You need to do the processing locally, which means end-point devices need to be powerful (this is potentially the future of NVIDIA and others like Intel, though NVIDIA has a large lead).
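To make the capture-then-decide loop concrete, here is a minimal sketch in Python. It is purely illustrative: the sensor reading, the `nearest_obstacle_m` field and the 25-metre braking threshold are all my assumptions, not anything from the slides or a real autonomy stack.

```python
import random

# Hypothetical sketch: a self-driving car can't ship gigabytes of raw
# sensor data to the cloud before acting, so the decision is made
# on-device, in the same loop that captured the data.

def read_lidar_frame():
    """Stand-in for a real sensor read: distance to the nearest obstacle (metres)."""
    return {"nearest_obstacle_m": random.uniform(0.0, 100.0)}

def decide(frame, braking_threshold_m=25.0):
    """Decide locally; no round trip to a data centre in the control loop."""
    if frame["nearest_obstacle_m"] < braking_threshold_m:
        return "BRAKE"
    return "CRUISE"

# Tight on-device loop: capture -> decide, raw data never leaves the car.
for _ in range(3):
    frame = read_lidar_frame()
    action = decide(frame)
```

The point of the sketch is the shape of the loop, not the logic: the latency budget is set by the vehicle, so the decision function must run where the frame is produced.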
Machine learning at the edge
Furthermore, machine learning will need to be distributed too, as inference needs to happen at the end point for decisions to be made rapidly where those algorithms are required. In these instances, for key decisions to be made, agility rather than raw power at the edge is key.
However, in a network with network effects, information will still be retained centrally, but selectively. So much data is generated at the edge that it can't all be pushed centrally (though I am sure the telcos would be happy to accommodate). Data needs to be selectively pushed centrally.
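What might "selectively pushed" look like? One common pattern is uploading only the samples the edge model found surprising, i.e. low-confidence observations. The sketch below assumes a `confidence` score per sample and a 0.6 cutoff; both are hypothetical choices for illustration.

```python
# Hypothetical sketch: keep raw data local and push only a curated
# slice to the cloud, e.g. observations the edge model was unsure about.

def select_for_upload(samples, confidence_cutoff=0.6):
    """Return only the samples the edge model scored below the cutoff."""
    return [s for s in samples if s["confidence"] < confidence_cutoff]

observations = [
    {"id": 1, "confidence": 0.97},  # routine; stays on the device
    {"id": 2, "confidence": 0.41},  # surprising; worth central learning
    {"id": 3, "confidence": 0.88},  # routine; stays on the device
]

upload_batch = select_for_upload(observations)  # only id 2 goes up
```

The design choice here is that bandwidth is spent on the data most likely to improve the central model, while the bulk of routine observations never leave the device.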
The cloud will be about learning and crunching real numbers. Information will be curated from the edge, aggregated, analysed and propagated back out to make a more agile loop at those edges.
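The aggregate-then-propagate loop described above can be sketched as well. I am assuming a federated-averaging-style scheme, where the cloud averages model weights reported by each edge device and broadcasts the result back; the slides do not prescribe this particular mechanism, and the weight values are toy numbers.

```python
# Hypothetical sketch of the cloud-side loop: curate updates from many
# edges, aggregate them, and propagate the improved model back out.

def aggregate(edge_updates):
    """Average the model weights reported by each edge device."""
    n = len(edge_updates)
    keys = edge_updates[0].keys()
    return {k: sum(update[k] for update in edge_updates) / n for k in keys}

# Weight vectors sent up by three cars (illustrative numbers).
edge_updates = [
    {"w0": 0.2, "w1": 1.0},
    {"w0": 0.4, "w1": 0.8},
    {"w0": 0.6, "w1": 1.2},
]

global_model = aggregate(edge_updates)  # roughly {"w0": 0.4, "w1": 1.0}
# ...the cloud would then broadcast global_model back to every edge,
# closing the loop the post describes.
```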
It’s pretty obvious when you think about it, and it’s pretty interesting to think about.