Artificial Intelligence: Edge vs Cloud Pt.2


In this second article of the series, based on the panel "AI on the Edge vs the Cloud" held during the Industrial AI Summit with Alex West (Principal Analyst at Omdia), Matteo Dariol (Lead Innovation Strategist at Bosch Rexroth), Anders Rahm-Nilzon (Director of the Cloud Centre of Excellence at Volvo Group), and our CEO and Data Science Director Eric Topham, we elaborate on the cloud and edge paradigm for companies and on the elements shaping the future development and deployment of industrial AI.

Do companies have a good understanding of the benefits and the differences between edge and cloud?

In the past there was a general trend in the industry to oscillate between going all-in on cloud solutions and forgetting the edge, and then swinging to the other extreme, going all-in on edge solutions and forgetting the cloud as a viable option. This time it is different: more and more companies are recognising these oscillations and how they ultimately get trapped in so-called fog computing.

There is a cloud and edge paradigm in the industry: there is no cloud without a good edge and vice versa. That is where the future of the industry lies. Once companies have built their network in the cloud, drawing on massive, parallel computation, they need to be able to bring that network down to the edge and run it where the data is being produced. Organisations need the cloud to store vast amounts of data at effectively infinite scale, improving their capabilities and computational power, but they need to rely on a good edge on the shop floor as well.
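As a rough illustration of this split (our own minimal sketch, not a detail from the panel; the data, model, and file names are hypothetical), the Python example below trains a simple model "in the cloud" and then ships only the learned parameters to the edge, where inference runs next to the data:

```python
import numpy as np

# --- Cloud side: train on the full historical dataset ---
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))           # e.g. sensor features collected centrally
y = X @ np.array([0.5, -1.2, 2.0]) + 0.1   # synthetic target for the sketch

# A closed-form least-squares fit stands in for large-scale cloud training
X_b = np.hstack([X, np.ones((len(X), 1))])
weights, *_ = np.linalg.lstsq(X_b, y, rcond=None)
np.save("model_weights.npy", weights)      # artefact shipped down to the edge device

# --- Edge side: load the trained model and score data where it is produced ---
edge_weights = np.load("model_weights.npy")

def predict(sample: np.ndarray) -> float:
    """Run inference on a single sensor reading at the edge."""
    return float(np.append(sample, 1.0) @ edge_weights)

print(predict(np.array([0.2, -0.4, 1.1])))
```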

From a data science perspective, federated learning can help organisations understand the value of the edge-cloud paradigm. Manufacturers are now starting to realise its full potential and are more receptive and knowledgeable than they were a few years ago.

With federated learning, for instance, there are several potential upsides, even for clients who are still hesitant. The first is computing power: it can be far more efficient to do the training at the edge and then simply do the aggregation centrally, which also allows manufacturers and organisations to control, and often reduce, their costs. The second is privacy. Subject to certain GDPR constraints, being able to train models without moving the data and then return only the abstracted results allows us to overcome several privacy issues whilst remaining compliant.
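To make the edge-training and central-aggregation idea concrete, here is a minimal federated-averaging sketch in Python. The devices, their data, and the simple equal-weight averaging are hypothetical simplifications for illustration, not the specific setup any panellist described:

```python
import numpy as np

rng = np.random.default_rng(42)

def local_train(weights, X, y, lr=0.01, epochs=50):
    """Gradient-descent update of a linear model on one device's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w

# Three edge devices, each holding data that never leaves the device
devices = []
for _ in range(3):
    X = rng.normal(size=(200, 4))
    y = X @ rng.normal(size=4) + rng.normal(scale=0.1, size=200)
    devices.append((X, y))

global_w = np.zeros(4)
for _ in range(5):                                  # a few federated rounds
    local_updates = [local_train(global_w, X, y) for X, y in devices]
    # Central aggregation: only model weights travel, never the raw data
    global_w = np.mean(local_updates, axis=0)

print("aggregated weights:", np.round(global_w, 2))
```

In a production setting the central average would typically be weighted by each device's sample count, but the principle is the same: computation stays at the edge and only abstracted parameters move to the cloud.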

There is also an additional, distributed paradigm, in which workloads are instead executed wherever the data and compute happen to be.

Specifically for ML applications, organisations can push their training pipeline down to the edge and effectively train one model per device, a solution that works particularly well when companies have devices with heterogeneous data distributions.
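Below is a small hypothetical sketch of that per-device variant (device names and data are invented for illustration): the same training pipeline is pushed to every device, but the resulting models are kept separate rather than aggregated, which suits devices whose data distributions differ from one another:

```python
import numpy as np

rng = np.random.default_rng(7)

def train_pipeline(X, y):
    """The shared training pipeline pushed down to every edge device."""
    X_b = np.hstack([X, np.ones((len(X), 1))])
    weights, *_ = np.linalg.lstsq(X_b, y, rcond=None)
    return weights

# Each device sees a different (heterogeneous) relationship in its data
device_models = {}
for device_id, true_w in {"press_01": [2.0, -0.5], "oven_07": [-1.0, 3.0]}.items():
    X = rng.normal(size=(500, 2))
    y = X @ np.array(true_w) + rng.normal(scale=0.05, size=500)
    device_models[device_id] = train_pipeline(X, y)   # one model per device

for device_id, w in device_models.items():
    print(device_id, np.round(w, 2))
```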

In part 3 we will explore what is impacting the future development and deployment of industrial AI. To read part 1, click here.

Do you want to find out more?

TO FIND OUT MORE ABOUT THE PROJECT & OUR SERVICES, GET IN TOUCH WITH THE TEAM.