Arctos Labs Edge Cloud Optimization (ECO)

A unique solution for optimizing how applications are distributed across the cloud-to-edge continuum

Over the last decade, the need for IT and digital communication has accelerated, and these services are now increasingly business-critical. An accelerating trend of moving workloads to the cloud enables greater flexibility and economies of scale.

The arrival of edge computing creates additional opportunities for solutions in a world that is increasingly focused on processing data. Gartner predicts[1] that 75% of enterprise data will be processed outside the enterprise network by 2025, up from 10% in 2018.


A model driven optimization engine powered by AI

Optimization is a relative word! What is optimal in one scenario may not be optimal in another. This calls for a flexible optimization engine that can cater to a range of optimization criteria. The core engine of ECO is built on model-driven code generation and AI technologies to provide the required flexibility and optimization capabilities.

The advanced technologies of the core engine enable ECO to consider multiple services/applications to achieve holistic optimization.

Applications are service-chains

Edge computing is often depicted as if an application consists of only one component, the edge application. In reality, applications (or services) are made up of a number of application components (SW) that are stitched together in a topology and interact with the outside world. It is imperative to properly model such a topology, together with the constraints on service links (e.g. latency & jitter) and compute resources, in order to bring automation into edge compute orchestration.

ECO utilizes such an application service model as one important input to the placement optimization, enabling ECO to determine the optimal location along the cloud-to-edge continuum for every application component in the service chain.
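As an illustration, a service-chain model of this kind can be sketched as a small data structure: components with compute requirements, plus links carrying latency and jitter constraints. This is a hypothetical sketch only; the class names, fields and values are assumptions for illustration, not ECO's actual model.

```python
from dataclasses import dataclass

# Hypothetical sketch of a service-chain topology: application components
# with compute requirements, and service links with latency/jitter
# constraints. All names and numbers are illustrative assumptions.

@dataclass
class Component:
    name: str
    cpu_cores: int   # required vCPUs
    memory_mb: int   # required RAM

@dataclass
class ServiceLink:
    src: str
    dst: str
    max_latency_ms: float   # constraint the placement must honour
    max_jitter_ms: float

@dataclass
class ServiceChain:
    components: list
    links: list

# Example chain: camera ingest -> edge analytics -> cloud archive
chain = ServiceChain(
    components=[
        Component("ingest", cpu_cores=2, memory_mb=2048),
        Component("analytics", cpu_cores=8, memory_mb=8192),
        Component("archive", cpu_cores=4, memory_mb=4096),
    ],
    links=[
        ServiceLink("ingest", "analytics", max_latency_ms=10.0, max_jitter_ms=2.0),
        ServiceLink("analytics", "archive", max_latency_ms=100.0, max_jitter_ms=20.0),
    ],
)
```

A model like this gives an optimizer both the per-component resource demands and the per-link constraints it needs to reason about the whole topology at once, rather than about a single "edge application".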

Optimal placement depends on the infrastructure resource characteristics

Edge computing typically implies that only a limited amount of resources is available at each location. It also implies that those compute locations are interconnected over data transport networks across a cloud-to-edge continuum, where latency and jitter need to be considered. ECO utilizes infrastructure resource metrics to take the scarce edge infrastructure into account when calculating the optimal placement of application components.
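To make the idea concrete, a toy placement search is sketched below: it picks a site for each of two chained components so that per-site compute capacity and the latency budgets on the service links are respected, minimising total cost. This is a brute-force illustration under invented site names, capacities, latencies and costs; it is not ECO's algorithm.

```python
import itertools

# Toy resource-aware placement (illustrative only): choose sites for two
# chained components given compute capacity, transport latency and cost.
# All names and numbers below are assumptions for illustration.

sites = {
    "edge-1":  {"cpu_free": 4,  "cost_per_cpu": 5.0},
    "edge-2":  {"cpu_free": 8,  "cost_per_cpu": 4.0},
    "central": {"cpu_free": 64, "cost_per_cpu": 1.0},
}

# Symmetric transport latency between sites, in milliseconds.
latency_ms = {
    ("edge-1", "edge-2"): 8.0,
    ("edge-1", "central"): 40.0,
    ("edge-2", "central"): 35.0,
}

def lat(a, b):
    return 0.0 if a == b else latency_ms.get((a, b), latency_ms.get((b, a)))

demands = {"analytics": 4, "archive": 8}  # vCPUs per component
ingest_site = "edge-1"                    # data source pinned at an edge site
ingest_budget_ms = 10.0                   # ingest -> analytics latency budget
chain_budget_ms = 10.0                    # analytics -> archive latency budget

best = None
for placement in itertools.product(sites, repeat=2):
    p = dict(zip(demands, placement))
    # Capacity check per site (components may share a site).
    used = {}
    for comp, site in p.items():
        used[site] = used.get(site, 0) + demands[comp]
    if any(used[s] > sites[s]["cpu_free"] for s in used):
        continue
    # Transport constraints on the service links.
    if lat(ingest_site, p["analytics"]) > ingest_budget_ms:
        continue
    if lat(p["analytics"], p["archive"]) > chain_budget_ms:
        continue
    cost = sum(demands[c] * sites[s]["cost_per_cpu"] for c, s in p.items())
    if best is None or cost < best[0]:
        best = (cost, p)

print(best)  # (52.0, {'analytics': 'edge-1', 'archive': 'edge-2'})
```

Note how the latency budget to the pinned data source rules out the cheap central cloud: the cheapest feasible placement keeps both components at the edge. This is exactly the trade-off between scarce edge resources and transport constraints that a resource-aware placement engine has to resolve.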

Built to be integrated in orchestration systems

ECO does not contain full orchestration features. Instead, it uses a plug-in architecture that makes it easy to integrate ECO capabilities and the ECO optimization engine into systems built on platforms such as OpenStack, Kubernetes, SmartEdge or VMware.
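One common way to realise such a plug-in architecture is an adapter interface per orchestration platform: the optimizer asks the plug-in what infrastructure exists and then delegates the actual deployment to it. The sketch below is a hypothetical illustration of that pattern; the class and method names are assumptions, not ECO's actual plug-in API, and a real adapter would call the platform's own API rather than return canned data.

```python
from abc import ABC, abstractmethod

# Hypothetical plug-in interface between a placement optimizer and an
# orchestration platform. Names are illustrative assumptions only.

class InfrastructurePlugin(ABC):
    """Adapter between the optimizer and one orchestration platform."""

    @abstractmethod
    def list_sites(self) -> dict:
        """Return available compute locations and their free resources."""

    @abstractmethod
    def deploy(self, component: str, site: str) -> str:
        """Ask the platform to place a component at the chosen site."""

class KubernetesPlugin(InfrastructurePlugin):
    # Toy adapter; a real one would talk to the Kubernetes API server.
    def list_sites(self) -> dict:
        return {"edge-cluster": {"cpu_free": 8}}

    def deploy(self, component: str, site: str) -> str:
        return f"scheduled {component} on {site}"

plugin = KubernetesPlugin()
print(plugin.deploy("analytics", "edge-cluster"))
```

With this shape, supporting a new platform means writing one new adapter class, while the optimization engine itself stays unchanged.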

ECO compared

When comparing ECO with alternative approaches, we find some unique features:

  • Declarative approach
  • Service chain across cloud-to-edge continuum
  • Placement with resource-awareness considering compute & data transport
  • Scaling and clustering across geography
  • Constraints across entire service topology
  • GitOps compatible
  • Holistic multi-service optimization to avoid fragmentation