Here we collect articles and papers that convey our opinion on the evolution of the modern communications landscape.
To get access: click the desired link and you will be redirected to a landing page. When you return here, the link will display the document.
Towards a more efficient and easier to use edge computing
Edge computing is seeing increasing adoption. Initial use cases tend to be static with respect to which applications are placed in which edge locations. As additional and more complex use cases emerge, and as cloud/telco providers expand their presence with more locations, more advanced orchestration capabilities will be needed to harvest potential performance gains, minimize costs, reduce CO2 footprint, or improve resilience by intelligently placing application components across the edge-to-cloud continuum.
This paper argues that the way forward is to introduce an intent-based concept that centres on the application and its required infrastructure characteristics, complemented by cost metrics and a placement engine that can match such intents to the specific infrastructure available.
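To illustrate the idea, the sketch below shows what matching an intent against available infrastructure could look like. This is a hypothetical toy example, not Arctos Labs' actual placement engine; all names, metrics, and thresholds are assumptions made for illustration.

```python
# Toy intent-matching sketch (hypothetical, for illustration only):
# pick the cheapest location that satisfies the intent's constraints.

def match_intent(intent, locations):
    """Return the cheapest location meeting the intent's latency and
    CO2 constraints, or None if no location qualifies."""
    feasible = [
        loc for loc in locations
        if loc["latency_ms"] <= intent["max_latency_ms"]
        and loc["co2_g_per_kwh"] <= intent["max_co2_g_per_kwh"]
    ]
    return min(feasible, key=lambda loc: loc["cost_per_hour"], default=None)

intent = {"max_latency_ms": 20, "max_co2_g_per_kwh": 300}
locations = [
    {"name": "central-cloud", "latency_ms": 45, "cost_per_hour": 0.10, "co2_g_per_kwh": 250},
    {"name": "metro-edge",    "latency_ms": 12, "cost_per_hour": 0.18, "co2_g_per_kwh": 280},
    {"name": "far-edge",      "latency_ms": 4,  "cost_per_hour": 0.35, "co2_g_per_kwh": 320},
]
print(match_intent(intent, locations)["name"])  # → metro-edge
```

The central cloud fails the latency constraint and the far edge fails the CO2 constraint, so the metro edge wins on cost among the feasible candidates.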
Orchestration and optimization in industrial IoT (Internet of things)
This paper outlines how multi-domain orchestration and optimization capabilities can contribute to increased availability and reliability of industrial IoT systems by making operational capabilities more autonomous. By adding intelligence and optimization into software deployment and failure remediation on a holistic and multi-domain layer, orchestration and optimization capabilities will enable operational teams to be more responsive and achieve higher availability and reliability of their industrial IoT systems. These capabilities further help enterprises to manage their set of digital information technology (IT) / operational technology (OT) assets, explore more data-driven operations, and achieve a higher level of integration of IT and OT systems towards Industry 4.0 visions.
Dynamic placement optimization in edge computing scenarios can enable significant capex and opex savings, new research indicates
Edge computing typically reduces the amount of data sent upwards in networks thereby contributing to less energy being consumed. There is however a risk that smaller, distributed, and heterogeneous edge infrastructure will come with lower and varying utilization, leading to lower efficiency.
To shed light on these topics, Arctos Labs, Wind River, and RISE (Research Institutes of Sweden) have conducted research aimed at answering:
Where is the most resource-efficient location to place a specific workload at a particular time in a dynamic edge-to-cloud continuum with a fleet of edge locations?
On the relation between distance and latency
With the arrival of edge computing, more and more applications rely on tight latency budgets and, more importantly, on shorter latencies. This does not mean that all application components need to be placed as close as possible, but rather that there is a relation between latency and application experience, functionality, or similar characteristics.
This paper examines the relation between geographical distance and latency. As one might imagine, for shorter distances the latency variation becomes greater, as latency depends more on the detours needed to traverse the physical data network topology. But at what distance does it start to matter? Read the paper to find out.
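A simple lower bound helps frame the question: light in fibre travels at roughly two-thirds of its vacuum speed, about 200 km per millisecond, so distance sets a hard floor on round-trip time while real routes add detours on top. The figures below are back-of-the-envelope arithmetic, not results from the paper.

```python
# Back-of-the-envelope lower bound on round-trip latency from distance.
# Light in fibre propagates at roughly 200,000 km/s (~2/3 of c), i.e.
# about 200 km per millisecond. Real paths detour, so actual latency
# is higher; path_stretch models that detour factor.

SPEED_IN_FIBRE_KM_PER_MS = 200.0

def min_rtt_ms(distance_km, path_stretch=1.0):
    """Ideal round-trip propagation time for a given distance."""
    return 2 * distance_km * path_stretch / SPEED_IN_FIBRE_KM_PER_MS

for d in (10, 100, 1000):
    print(f"{d:>5} km: ideal RTT {min_rtt_ms(d):.2f} ms, "
          f"with 1.5x detour {min_rtt_ms(d, 1.5):.2f} ms")
```

At 1000 km the propagation floor alone is 10 ms round trip, while at 10 km it is 0.1 ms, which is why routing detours dominate at short distances.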
Going green @ the edge: Cost modelling of Edge compute
Data is fuelling an increasingly digital world, as technologies related to video, 5G, artificial intelligence, VR/AR, etc. are explored. Edge compute is promoted as a way to support all these new services.
Edge compute introduces cost issues that need to be considered as input to business decisions on building or using a distributed set of clouds. Economy-of-scale aspects may make smaller data centres less efficient. It is also well known that greenhouse-gas emissions from data centres are significant and on the rise, making efficiency increasingly important. Furthermore, the cost of data transport is already significant, today contributing a double-digit share of global electricity consumption. Those figures can only be expected to increase as the world becomes more dependent on data.
These two trends raise the question of whether it is more efficient to move data around, or whether reduced data transfer would justify a potentially less efficient data centre closer to the user.
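This trade-off can be sketched with simple energy arithmetic: compute energy scaled by the data centre's efficiency overhead (PUE) plus the energy spent moving data. All numbers below are hypothetical placeholders chosen for illustration, not figures from the paper.

```python
# Illustrative trade-off (all numbers hypothetical): haul data to an
# efficient central data centre, or process it in a less efficient
# edge site that avoids most of the transport?

def total_energy_kwh(compute_kwh, pue, data_gb, transport_kwh_per_gb):
    """IT compute energy scaled by PUE, plus network transport energy."""
    return compute_kwh * pue + data_gb * transport_kwh_per_gb

compute_kwh = 10.0  # useful IT energy for the workload

# Central site: good PUE, but all 500 GB must be transported there.
central = total_energy_kwh(compute_kwh, pue=1.2, data_gb=500, transport_kwh_per_gb=0.05)

# Edge site: worse PUE, but only 50 GB of results leave the edge.
edge = total_energy_kwh(compute_kwh, pue=1.6, data_gb=50, transport_kwh_per_gb=0.05)

print(f"central: {central:.1f} kWh, edge: {edge:.1f} kWh")  # → central: 37.0 kWh, edge: 18.5 kWh
```

Under these assumed numbers the less efficient edge site still wins, because the avoided transport energy outweighs the PUE penalty; with a smaller data reduction the balance tips the other way.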
Placement of Workloads in Edge & Cloud Networks
Edge compute promises to deliver cloud computing capabilities for tomorrow's applications and an exploding amount of data.
Networks with edge compute are complex, and the risk of ending up underutilized and unprofitable is prominent, because of the delicate balance between proximity to users (to achieve performance) and scale (to enable stable statistical utilization).
This paper introduces Arctos Labs' placement technology, which enables workload placement optimization that can continuously assess the optimal distribution of workload components across an edge-to-central-cloud continuum in a closed-loop concept.
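The closed-loop idea can be sketched in a few lines: periodically re-score candidate locations and migrate only when a clearly better placement appears. This is a minimal hypothetical sketch, not the actual product logic; the scoring weights and hysteresis factor are assumptions.

```python
# Minimal closed-loop placement sketch (hypothetical, not the actual
# Arctos Labs engine): re-score candidates each iteration and migrate
# only on a clear improvement, to avoid churn between near-equal sites.

def score(loc, weights):
    """Lower is better: weighted latency plus weighted cost."""
    return (weights["latency"] * loc["latency_ms"]
            + weights["cost"] * loc["cost_per_hour"])

def control_loop_step(current, candidates, weights, hysteresis=0.9):
    """One loop iteration: return the new placement. The hysteresis
    factor demands a real improvement before triggering a migration."""
    best = min(candidates, key=lambda loc: score(loc, weights))
    if score(best, weights) < hysteresis * score(current, weights):
        return best
    return current  # stay put

weights = {"latency": 1.0, "cost": 10.0}
cloud = {"name": "cloud", "latency_ms": 40, "cost_per_hour": 0.1}
edge = {"name": "edge", "latency_ms": 5, "cost_per_hour": 0.3}
print(control_loop_step(cloud, [edge, cloud], weights)["name"])  # → edge
```

Here the edge site scores 8.0 against the cloud's 41.0, well past the hysteresis threshold, so the loop recommends migrating the workload.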
Open Networking Summit 2019
Using open source in networking is a trend that keeps accelerating. Service providers are increasingly going from talking about open source to actually deploying it. Open Networking Summit is one of the premier events covering this trend. Arctos Labs attended this year's show, and the report is available here.
Validating VNFs on a Cloud Infrastructure
The ability to understand the full scope of testing and validation of VNFs is crucial for CSPs in achieving the needed quality. CSPs need to assess which aspects of testing and validation should be carried out in-house, and which should be covered elsewhere. There is no obvious "one-size-fits-all", and the various initiatives launched cover only a subset of aspects. This paper outlines all aspects and discusses the various approaches possible.