General - ICSOC 2022 | The 20th International Conference on Service-Oriented Computing | https://icsoc2022.spilab.es

Enhancing Performance Modeling of Serverless Functions via Static Analysis
https://icsoc2022.spilab.es/2022/11/28/enhancing-performance-modeling-of-serverless-functions-via-static-analysis/
Mon, 28 Nov 2022 12:20:41 +0000


As a developer, how can you predict and optimize the performance of your serverless functions? Our paper “Enhancing Performance Modeling of Serverless Functions via Static Analysis” shows you how to build your own analytical performance model for serverless workflows. The proposed method helps developers extract the model topology directly from their source code and devises an instrumentation strategy for code-level profiling that enhances model accuracy. Accurate prediction of response time can be achieved with an error rate below 7.3%!
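The paper's static analysis and instrumentation pipeline is only summarized above; as a rough illustration of what code-level profiling of a serverless function can look like, here is a minimal Python sketch. The decorator, the JSON log format, and the `handler` entry point are hypothetical and not taken from the paper.

```python
import functools
import json
import time

def profile(fn):
    """Log per-invocation execution time; such logs could later calibrate a performance model."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(json.dumps({"function": fn.__name__, "elapsed_ms": round(elapsed_ms, 3)}))
    return wrapper

@profile
def handler(event, context):
    # hypothetical serverless entry point doing some work
    return {"statusCode": 200, "body": "done"}
```

In a workflow composed of several functions, per-function timings of this kind can feed the service-time parameters of an analytical model.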

Authors:

Runan Wang is currently pursuing the Ph.D. degree in the Department of Computing, Imperial College London. Her research focuses on performance models, program analysis, and serverless computing.

Giuliano Casale is a Reader at Imperial College London. He teaches and does research in performance engineering and cloud computing and has published more than 150 refereed papers.

Antonio Filieri is an associate professor at Imperial College London and a senior applied scientist with Amazon AWS. His research focuses on the application of mathematical methods to software engineering.

DeepThought: a Reputation and Voting-based Blockchain Oracle
https://icsoc2022.spilab.es/2022/11/28/deepthought-a-reputation-and-voting-based-blockchain-oracle/
Mon, 28 Nov 2022 12:02:47 +0000


Nowadays, publishing and retrieving information through web services (e.g., social networks, news aggregators) has become extremely easy. However, the lack of quality controls has made it possible for fake news – inaccurate or sometimes forged information – to spread.

The blockchain is seen as a promising technology to address this issue. In particular, the blockchain makes it possible to immutably and persistently store information on the provenance of each piece of news. However, the blockchain alone cannot certify the authenticity of the information. Instead, it must be combined with a so-called “oracle”. An oracle is a service made of machines or humans that validate the information before storing it in a blockchain.

In our paper, we present DeepThought, a blockchain oracle that combines voting and reputation schemes to certify the authenticity of information with high confidence. Existing web services can exploit DeepThought to validate new information or to retrieve information that has already been certified. Compared to existing techniques, DeepThought makes it extremely difficult for malicious users to collude and corrupt the service.
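The exact voting and reputation rules of DeepThought are defined in the paper; the Python sketch below only illustrates the general idea of combining the two mechanisms, with hypothetical reputation-update factors.

```python
def aggregate_votes(votes, reputation):
    """votes: voter -> True/False on a piece of news; reputation: voter -> positive weight."""
    weight_true = sum(reputation[v] for v, ballot in votes.items() if ballot)
    weight_false = sum(reputation[v] for v, ballot in votes.items() if not ballot)
    outcome = weight_true >= weight_false

    # Reward voters who agreed with the outcome, penalise the rest (factors are illustrative).
    for voter, ballot in votes.items():
        reputation[voter] *= 1.1 if ballot == outcome else 0.9
    return outcome

# Example: a colluding minority with low reputation cannot flip the outcome.
rep = {"a": 1.0, "b": 1.0, "c": 0.2, "d": 0.2}
print(aggregate_votes({"a": True, "b": True, "c": False, "d": False}, rep))  # True
```

Weighting votes by reputation, and updating reputation based on agreement with the outcome, is what makes sustained collusion by low-reputation voters costly.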

This work is partially funded by the Otto Moensted Foundation: https://omfonden.dk

Authors:

Marco Di Gennaro is a Computer Science and Engineering MSc Student at Politecnico di Milano. His research interests include Cybersecurity, Graph Representation Learning, and Blockchain.

Lorenzo Italiano is a PhD student in Information Technology at Politecnico di Milano. His interests are related to cybersecurity and machine learning topics mostly applied in the smart mobility field.

Giovanni Meroni is an Assistant Professor at Technical University of Denmark. His research interests include Information Systems, Business Process Management, Blockchain, and the Internet of Things.

Giovanni Quattrocchi is an adjunct professor and post-doc at Politecnico di Milano, Italy. His research interests include self-adaptive systems, Edge- and Blockchain-based systems, and software engineering for AI.

Cheops, a service to blow away Cloud applications to the Edge
https://icsoc2022.spilab.es/2022/11/28/cheops-a-service-to-blow-away-cloud-applications-to-the-edge/
Mon, 28 Nov 2022 11:42:41 +0000

https://gitlab.inria.fr/discovery/cheops

Authors:

Marie Delavergne is a PhD candidate at Inria (France). She received a Master's in software architecture from the University of Nantes, France, in 2018. Her research interests are massively distributed cloud computing infrastructures for Edge computing. She is one of the main developers of Cheops.

Geo Johns Antony is a PhD candidate at Inria (France). He received his Master's in computer science and engineering, with a specialization in big data analytics, from the Vellore Institute of Technology (India) in 2020. His research fields include distributed computing for Cloud and Edge systems, frameworks, and applications. His current work focuses on generalizing the geo-distribution of resources in Cloud and Edge systems. His work is part of the Cheops project, to which he is one of the main contributors.

Adrien Lebre is a full professor at IMT Atlantique (France), head of the STACK research group, and PI of the Open Science Discovery Initiative. He holds a Ph.D. from the Grenoble Institute of Technology and a habilitation from the University of Nantes. His activities focus on large-scale distributed systems: their design, compositional properties, and efficient implementation. Since 2015, his activities have mainly focused on the Edge Computing paradigm, in particular in the OpenStack ecosystem.

IoT System for Occupational Risks Prevention at a WWTP
https://icsoc2022.spilab.es/2022/11/28/iot-system-for-occupational-risks-prevention-at-a-wwtp/
Mon, 28 Nov 2022 11:32:49 +0000


Biohazards and noise are real concerns in wastewater treatment plants. These plants expose operators to the risk of inhaling gases released by contaminants carried in the wastewater and to dangerously high noise levels generated by the work equipment. The plants are equipped with sensors capable of monitoring ambient gas and noise levels, but this is not sufficient: the operators also need to be geolocated. Indoor geolocation, however, remains a problem due to limited GPS accuracy. Alternatives such as Bluetooth allow more accurate positioning. In this work, we present an IoT system that geolocates operators indoors through Bluetooth beacons and cross-references their position with the information from gas and noise sensors to prevent occupational risks.
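The system's actual architecture is only summarized above; as a rough illustration of the cross-referencing step, here is a minimal Python sketch that picks the nearest beacon by RSSI and checks that zone's sensor readings. The beacon names, RSSI convention, and exposure thresholds are hypothetical.

```python
def locate(rssi_by_beacon):
    """Nearest-beacon localisation: the beacon with the strongest (least negative) RSSI wins."""
    return max(rssi_by_beacon, key=rssi_by_beacon.get)

def risk_alerts(zone, gas_ppm, noise_db, gas_limit=25.0, noise_limit=85.0):
    """Cross-reference the operator's zone with that zone's gas and noise readings."""
    alerts = []
    if gas_ppm > gas_limit:
        alerts.append(f"Gas level {gas_ppm} ppm above limit in zone {zone}")
    if noise_db > noise_limit:
        alerts.append(f"Noise level {noise_db} dB above limit in zone {zone}")
    return alerts

zone = locate({"beacon-tank-1": -52, "beacon-pump-room": -78})   # -> "beacon-tank-1"
print(risk_alerts(zone, gas_ppm=31.0, noise_db=82.0))
```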

Authors:

Sergio Laso is currently working at the company Global Process and Product Improvement, where he is doing an industrial PhD. His research interests are Mobile Computing, Pervasive Systems, Edge Computing and the Internet of Things.

Daniel Flores-Martin is a PhD student at the University of Extremadura (Spain). His research interests are Mobile Computing, Context-Awareness, Crowd Sensing, and the Internet of Things.

Juan Pedro Cortés-Pérez is an associate professor at the University of Extremadura. His research focuses on BIM technology for sustainability, health and safety, and the life cycle of infrastructures.

Javier Berrocal is an associate professor in the Department of Informatics and Telematics System Engineering at the University of Extremadura (Spain). His research interests include Distributed Systems, Mobile Computing, Pervasive Systems and the Internet of Things.

Juan M. Murillo is a full Professor at the University of Extremadura, Spain and CEO of the COMPUTAEX foundation. His research interests include Quantum Software Engineering, mobile computing, and cloud computing.

 

Using OpenAPI for the development of Hybrid Classical-Quantum Services
https://icsoc2022.spilab.es/2022/11/28/using-openapi-for-the-development-of-hybrid-classical-quantum-services/
Mon, 28 Nov 2022 11:02:47 +0000

Quantum Computing has started to demonstrate its first practical applications. As the technology matures to the point where quantum computers can expand commercially, large companies such as Google, Microsoft, IBM and Amazon are making a considerable effort to make them accessible through the cloud so that research and industry initiatives can test their capabilities. The characteristics of this paradigm and the lack of mature tools still make the process of defining, implementing, and running quantum, or hybrid classical-quantum, software systems difficult compared with the procedures used for purely classical ones. To address this gap, we demonstrate a method for defining quantum services and automatically generating the corresponding source code through an extension of the OpenAPI Specification. The extension enables developers to define quantum services at a high level of abstraction, link them with quantum circuits, and generate the source code of the service to be deployed on a quantum computer in the same way they do for classical services.
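The OpenAPI extension and the generator's actual output are defined by the authors' tooling; the Python sketch below only illustrates the kind of hybrid classical-quantum endpoint such a generator might emit, assuming Flask and Qiskit with the Aer simulator as the target stack. The route name and the circuit are hypothetical.

```python
from flask import Flask, jsonify
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

app = Flask(__name__)
simulator = AerSimulator()

@app.route("/bell", methods=["GET"])
def bell_service():
    """Classical HTTP endpoint wrapping the execution of a quantum circuit."""
    circuit = QuantumCircuit(2, 2)
    circuit.h(0)
    circuit.cx(0, 1)
    circuit.measure([0, 1], [0, 1])
    result = simulator.run(transpile(circuit, simulator), shots=1024).result()
    return jsonify(dict(result.get_counts()))

if __name__ == "__main__":
    app.run(port=8080)
```

In the approach described above, the circuit and the deployment target would come from the annotated OpenAPI document rather than being hard-coded in the handler.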

Authors:

Javier Romero-Álvarez is a PhD student and research scientist fellow at the University of Extremadura, Spain. He obtained his degree in Computer Science from the University of Extremadura in 2020. His research interests include web engineering, quantum computing, service-oriented computing, mobile computing and Chatbots development.

Jaime Alvarado-Valiente is a PhD student and research scientist fellow at the University of Extremadura, Spain. He obtained his degree in Computer Science from the University of Extremadura in 2020. His research interests include web engineering, quantum computing, service-oriented computing and Chatbots development.

Enrique Moguel is an assistant professor at the University of Extremadura (Spain). He completed his MSc in Computer Science at the University Carlos III (Spain) in 2010 and a PhD in Computer Science at the University of Extremadura in 2018. His research interests include Web Engineering, Smart Systems, and Quantum Computing.

Jose Garcia-Alonso is an Associate Professor at the University of Extremadura, Spain, where he completed his PhD in software engineering in 2014. He is the co-founder of Gloin, a software consulting company, and Health and Aging Tech, an eHealth company. His interests include web engineering, quantum software engineering, pervasive computing, eHealth, and gerontechnology.

Juan M. Murillo is Full Professor at the University of Extremadura, Spain, CEO of the COMPUTAEX foundation, and co-founder of the start-up Gloin. His research interests include software architectures, Pervasive and Mobile Computing and Quantum Computing. Murillo holds a PhD in computer science from the University of Extremadura.

Node4Chain: Extending Node-RED Low-code Tool for Monitoring Blockchain Networks
https://icsoc2022.spilab.es/2022/11/28/node4chain-extending-node-red-low-code-tool-for-monitoring-blockchain-networks/
Mon, 28 Nov 2022 10:45:02 +0000

Blockchain is a cutting-edge distributed technology that provides security, immutability, traceability and transparency of data. Normally, using this technology requires certain technical knowledge, and integrating it with other systems is an even more complex task. To deal with this challenge, in this demo we present Node4Chain, a novel extension of the Node-RED low-code tool that allows developers to define graphical flows specifying the data inputs, outputs and processing logic needed to monitor blockchain networks such as Ethereum, Binance or Polygon in real time, as well as to integrate them with other systems and technologies. Thanks to the low-code paradigm, these tasks become feasible for people with less technical knowledge.
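Node4Chain itself is an extension of Node-RED, so its nodes are built within that tool; purely for illustration, the Python sketch below (using web3.py, which is an assumption and not necessarily what the tool uses) shows the kind of block-polling logic a blockchain-monitoring node wraps. The RPC endpoint is a placeholder.

```python
import time
from web3 import Web3

# Placeholder RPC endpoint; a real deployment would point at an Ethereum, Polygon or BSC node.
w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))
block_filter = w3.eth.filter("latest")

while True:
    for block_hash in block_filter.get_new_entries():
        block = w3.eth.get_block(block_hash, full_transactions=True)
        # Forward each new block to downstream processing (here, just print a summary).
        print(block.number, len(block.transactions), "transactions")
    time.sleep(5)
```

In a low-code flow, this polling and forwarding would be hidden behind a drag-and-drop node, and the downstream processing would be other nodes in the flow.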

Authors:

Jesús Rosa-Bilbao is an Assistant Professor and a Ph.D. student in the UCASE Software Engineering Research Group at the University of Cadiz (UCA), Spain.

Juan Boubeta-Puig is an Associate Professor with the Department of Computer Science and Engineering and member of UCASE Software Engineering Research Group at the University of Cadiz (UCA), Spain.

GreenFog: A Framework for Sustainable Fog Computing
https://icsoc2022.spilab.es/2022/11/28/greenfog-a-framework-for-sustainable-fog-computing/
Mon, 28 Nov 2022 09:49:31 +0000


Fog computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data for many IoT applications. In recent years, the alarming increase in energy demand and carbon footprint of fog environments has become a critical issue. It is, therefore, necessary to reduce the energy consumption of these systems and integrate renewable energy into fog computing environments. Renewable energy sources, however, are prone to availability fluctuations due to their variable and intermittent nature. In this work, we propose a new fog computing framework that shapes the fog load to match energy consumption with renewable energy availability using adaptive Quality of Service (QoS). The proposed framework, along with its optimization techniques, is tested on a real-world micro data center (fog environment) powered by a solar energy source and connected to multiple IoT devices. The results show that our framework can offer significant reductions in brown energy usage for fog environments.
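GreenFog's actual load-shaping and optimization techniques are specified in the paper; the Python sketch below only illustrates the underlying idea of adapting QoS to the current renewable supply. The QoS levels, scaling factors, and supply/demand readings are hypothetical.

```python
# Hypothetical QoS levels, ordered from highest to lowest energy demand,
# with the fraction of full-service power each level is assumed to draw.
QOS_LEVELS = [("full", 1.0), ("reduced", 0.6), ("minimal", 0.3)]

def select_qos(solar_watts, full_service_watts):
    """Pick the highest QoS level whose estimated demand fits the current renewable supply."""
    for level, scale in QOS_LEVELS:
        if full_service_watts * scale <= solar_watts:
            return level
    return QOS_LEVELS[-1][0]  # fall back to minimal QoS; any remaining demand uses brown energy

print(select_qos(solar_watts=65.0, full_service_watts=100.0))  # -> "reduced"
```

Running such a decision periodically, as solar output fluctuates, is one simple way to keep consumption tracking renewable availability.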

Authors:

Adel N. Toosi is the director of Distributed Systems and Network Applications (DisNet) Laboratory and a Senior Lecturer at Monash University, Australia. His research interests include Serverless Computing, Edge Computing, and Sustainable IT.

Chayan Agarwal is currently working as a DevOps Engineer at Amadeus IT group. He received his CSE degree from the Manipal Institute of Technology, Manipal in 2017 and his MS in IT from Monash University in 2021.

Lena Mashayekhy is an Associate Professor in the Department of Computer and Information Sciences at the University of Delaware. Her research interests include edge/cloud computing, Internet of Things, and algorithmic game theory.

Sara Kardani Moghaddam received her Ph.D. from the University of Melbourne and an ME degree in Information Technology from the Sharif University of Technology. Her research interests are large-scale distributed systems, performance management, and unsupervised ML.

Redowan Mahmud is a Lecturer (Computing Discipline) at Curtin University. He received a Ph.D. from the University of Melbourne. His research interests include IoT and Edge computing.

Zahir Tari is a full professor in Distributed Systems at RMIT University (Australia) and the Research Director of the Centre of Cyber Security Research and Innovation (CCSRI). Zahir is one of the leading international experts in performance/scalability/reliability (of large-scale systems) and cyber-security (of critical systems).

Scalable Discovery and Continuous Inventory of Personal Data at Rest in Cloud Native Systems
https://icsoc2022.spilab.es/2022/11/17/scalable-discovery-and-continuous-inventory-of-personal-data-at-rest-in-cloud-native-systems/
Thu, 17 Nov 2022 13:55:36 +0000


Authors

Elias Grünewald has been researching Privacy Engineering, Cloud/Fog Computing, and the interaction of computer science and society as a research associate at the Information Systems Engineering Research Group at TU Berlin since 11/2020. He completed his Master of Science in Information Systems Management at TU Berlin and Politecnico di Milano. Prior to this, he completed a Bachelor’s degree in Computer Science and Management at TU Berlin and Universidad de Sevilla.

During his studies, he was a long-time teaching staff member for the courses on Information Governance, Information Systems & Data Analysis, Programming, and Internet & Privacy (2015-2020). Afterwards, while still a student, he did research on AI-based transparency and data access in the project DaSKITA (2020). His achievements were supported by Alliance4Tech, Erasmus+ and the federal scholarship Deutschlandstipendium.

Elias is involved in science and higher education policy, among others in the Academic Senate (2019-2020), the Student Parliament (2016-2020), the Faculty Board IV Electrical Engineering and Computer Science (2017-2020), various professorial appointment committees, examination boards and ad hoc commissions of TU Berlin or the Berlin University Alliance as well as the European Universities Initiative ENHANCE.

 

Leonard Schurbert, Data Engineer, Alumnus TU Berlin

SCORE: A Resource-Efficient Microservice Orchestration Model Based on Spectral Clustering in Edge Computing
https://icsoc2022.spilab.es/2022/11/14/score-a-resource-efficient-microservice-orchestration-model-based-on-spectral-clustering-in-edge-computing/
Mon, 14 Nov 2022 09:51:43 +0000


In recent years, with the rapid popularization of fifth-generation mobile networks and the continuous development of mobile Internet applications, the number of intelligent terminal devices has grown rapidly. More and more services are sinking to the edge, which brings an explosive growth of data volume on edge devices and a sharp rise in computing power demand. Compared with traditional cloud data centers, characterized by centralized data, centralized computing power, and a stable network environment, edge scenarios are distributed, dynamic, and resource-constrained. At the same time, there are strict requirements on application response time in particular scenarios such as autonomous driving, telemedicine, and augmented reality. Completing data processing and application response locally on edge devices has therefore become an increasingly urgent requirement in edge computing: it reduces bandwidth usage, protects user data privacy, and effectively improves application response speed and user experience. With the development of lightweight virtualization technology and the maturity of the microservice architecture, deploying containerized applications on edge devices has become a mainstream application mode in edge scenarios. Compared with traditional monolithic applications, the microservice architecture is loosely coupled, which enables applications to be created, updated, and terminated independently and conveniently. However, the challenge of deploying microservice applications in edge scenarios comes from the contradiction between the latency sensitivity of applications and the limited resources of nodes. Existing container orchestration tools do not fully consider the coupling relationships between microservices during scheduling and resource allocation, and resource requests are not accurately evaluated during cluster scaling, resulting in unnecessary cross-node communication and wasted resources.

 

Therefore, this paper proposes a microservice orchestration model for edge scenarios, which realizes microservice scheduling based on spectral clustering and resource allocation based on a sliding window mechanism under the constraints of multi-dimensional indicators. Graphs depict the dependencies between microservices, and spectral clustering is then used to map microservices to edge nodes, significantly reducing cross-node communication traffic between microservices. At the same time, during cluster scaling under multi-indicator constraints, the resource requests of the application are accurately analyzed, and the sliding window mechanism is used for more fine-grained resource allocation, which improves the resource utilization rate while ensuring quality of service. The main work and innovations of this paper include the following points:

 

  1. Design and implement a microservice scheduling algorithm based on spectral clustering (MSSC). In the edge scenario, an application based on the microservice architecture is equivalent to a group of different sample objects dispersed across the cluster nodes. There are complex business-logic relationships between microservices, which can be viewed as a form of similarity between them. Microservices exchange information with each other through cross-node or intra-node communication, so unreasonable microservice scheduling may increase the cluster’s overall cross-node communication overhead and the application response time. This paper proposes a microservice scheduling algorithm based on spectral clustering in response to this situation. The main idea is to use the degree of interaction between microservices to measure the closeness of their relationships and then map microservices to nodes. The connections between microservices are mapped to edges, whose weights represent the size of the data flow exchanged between microservices; the entire application is thus mapped to a weighted undirected graph, and the scheduling problem is converted into a graph partitioning problem (see the spectral-clustering sketch after this list). The vertices of the graph are partitioned by the spectral clustering algorithm, and different clusters correspond to different nodes. Finally, the microservices are scheduled to the corresponding nodes based on the clustering results to minimize the overall cross-node communication overhead of the cluster and effectively reduce the response delay of the application. The experimental results show that, compared with the traditional method, the proposed MSSC algorithm increases the intra-node traffic ratio by 2.7 times, decreases the inter-node communication traffic by 17.7%, and reduces the minimum, average and maximum application response times by 37.8%, 10.7%, and 26.6%, respectively.

 

  2. Design and implement a dynamic allocation algorithm for microservice resources under the constraints of multi-dimensional indicators. Improving resource utilization is essential, especially when resources are limited, as in edge scenarios. Currently, mainstream container orchestration platforms implement general resource allocation algorithms but are not effectively optimized for edge application scenarios. This paper first introduces the principle of the existing microservice resource allocation algorithm, quantitatively analyzes resource utilization efficiency, and finds that resources can be further compressed while still guaranteeing service quality. The dynamic resource allocation algorithm based on a sliding window mechanism under multi-dimensional indicator constraints can effectively compress the amount of requested resources while guaranteeing the specified service quality, so that the physical nodes can carry more services under the same access pressure, effectively improving resource utilization (see the sliding-window sketch after this list). At the same time, the sliding window mechanism, implemented on top of a time series database, can effectively reduce the system turbulence caused by changes in cluster resource allocation. Experiments show that, while keeping the total number of processed requests and the response time unchanged during cluster scaling, the average memory required to process a single request on two high-load microservice applications is roughly 19.4% and 45.8% of that under the HPA mechanism.

 

  3. Build a simulated heterogeneous edge cluster. Through the analysis and processing of the Alibaba open-source microservice dataset, the traffic characteristics of a production environment are simulated to stress-test the cluster. To verify the algorithm’s performance in the edge scenario, we combine virtual machines and Raspberry Pi devices to build the simulated heterogeneous edge cluster. The Google open-source microservice application Online Boutique is used as the test object, and the distributed load-testing platform Locust simulates real-world traffic characteristics.
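As referenced in point 1 above, here is a minimal sketch of the graph-partitioning step, assuming scikit-learn's SpectralClustering and a hypothetical traffic matrix; the actual MSSC algorithm, its constraints, and its weighting scheme are defined in the paper.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical symmetric traffic matrix: traffic[i][j] = data exchanged between microservices i and j.
traffic = np.array([
    [0, 9, 1, 0],
    [9, 0, 2, 0],
    [1, 2, 0, 8],
    [0, 0, 8, 0],
], dtype=float)

# Treat traffic volume as pairwise affinity and partition microservices into as many groups as edge nodes.
n_edge_nodes = 2
labels = SpectralClustering(n_clusters=n_edge_nodes,
                            affinity="precomputed",
                            random_state=0).fit_predict(traffic)
print(labels)  # e.g. [0 0 1 1]: chatty microservices land on the same node
```

And as referenced in point 2, the sketch below illustrates the general shape of a sliding-window resource recommendation; the window size, headroom factor, and single memory metric are hypothetical and far simpler than the multi-indicator scheme evaluated in the paper.

```python
from collections import deque

class SlidingWindowAllocator:
    """Recommend a memory request from the most recent usage samples plus a safety headroom."""

    def __init__(self, window_size=60, headroom=1.2):
        self.samples = deque(maxlen=window_size)  # sliding window of recent observations
        self.headroom = headroom

    def observe(self, memory_mb):
        self.samples.append(memory_mb)

    def recommended_request(self):
        if not self.samples:
            return None
        return max(self.samples) * self.headroom  # peak usage in the window, padded

allocator = SlidingWindowAllocator(window_size=3)
for usage in (120.0, 135.0, 128.0):
    allocator.observe(usage)
print(allocator.recommended_request())  # 162.0 MB instead of a static, oversized request
```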

 

Authors

Ning Li, Yusong Tan, Xiaochuan Wang, Bao Li and Jun Luo.

DeepSCJD: An Online Deep Learning-based Model for Secure Collaborative Job Dispatching in Edge Computing
https://icsoc2022.spilab.es/2022/11/14/deepscjd-an-online-deep-learning-based-model-for-secure-collaborative-job-dispatching-in-edge-computing/
Mon, 14 Nov 2022 09:46:22 +0000


Edge computing alleviates the dilemma of insufficient capabilities in mobile devices as well as the high latency and bandwidth pressure of the remote cloud. Prior approaches that dispatch jobs to a single edge cloud are prone to task accumulation and excessive latency due to the uncertain workload and limited resources of edge servers. Offloading tasks to lightly loaded neighbors multiple hops away alleviates this dilemma but increases transmission cost and security risks. Consequently, dispatching jobs so as to trade off computing latency, energy consumption and security while ensuring users’ Quality of Service (QoS) is a complicated challenge.

Static model-based algorithms are inapplicable in dynamic edge environments because of their long decision-making time and significant computational overhead. Deep learning methods such as deep reinforcement learning (DRL), in contrast, make decisions from a global view and adapt to variations at the edge. Besides, graph neural networks (GNNs) effectively propagate node information and extract features over graphs, which makes them a natural fit for directed acyclic graph (DAG) jobs. However, some existing works using deep learning methods do not take the workload of servers and the data security risks during offloading into consideration.

Secure collaborative job dispatching in the edge can be defined as follows: for each subtask in a job, we make a dispatching decision about its execution location (the destination edge server that will perform this task and the corresponding transmission path), while achieving a trade-off between computing latency and offloading cost to ensure users’ QoS and service providers’ profits. Tasks are executed on a certain edge server either in the local edge cloud or in a non-local one several hops away.

In this paper, we propose DeepSCJD, an online deep learning-based secure collaborative job dispatching model for edge computing. The framework is as follows: the historical resource utilization of the edge servers is fed into a Bi-directional Long Short-Term Memory (BiLSTM) model to predict the workload at the next moment, which becomes one dimension of the servers’ characteristics. In the meantime, we process the network topology to obtain the initial features of the edge servers. Graph Attention Networks (GAT), a kind of GNN, extract and aggregate the features of jobs and edge servers separately, deriving high-dimensional abstract ones. These two outputs are concatenated to compose the current state. In the DRL part, a two-branch agent involving a linear branch and a DQN branch makes the job dispatching decisions.

Next, tasks are offloaded to and performed on the destination edge servers to generate a reward. The agent observes this reward to adjust its dispatching behavior at the next step. Moreover, we update the current load of the edge servers, and the time window of the BiLSTM slides by one step so that the new workload is used in the next round of prediction. The CPU, memory and disk occupancy of the edge servers also varies, and the server attributes are modified accordingly.
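The full DeepSCJD pipeline combines BiLSTM, GAT and DQN components as described above; as a small, self-contained illustration of just the workload-prediction stage, here is a PyTorch sketch of a BiLSTM that maps a window of past utilization vectors to the next-step utilization. The layer sizes, window length, and the three-metric input are hypothetical.

```python
import torch
import torch.nn as nn

class WorkloadPredictor(nn.Module):
    """BiLSTM that predicts next-step resource utilization from a window of past observations."""

    def __init__(self, n_features=3, hidden=64):  # e.g. CPU, memory, disk utilization
        super().__init__()
        self.bilstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_features)

    def forward(self, x):                # x: (batch, window_len, n_features)
        out, _ = self.bilstm(x)          # out: (batch, window_len, 2 * hidden)
        return self.head(out[:, -1, :])  # predicted utilization at the next time step

model = WorkloadPredictor()
history = torch.rand(8, 30, 3)           # 8 servers, 30 past observations, 3 metrics
print(model(history).shape)              # torch.Size([8, 3])
```

In the full model, such predictions would be concatenated with the GAT-derived topology features to form the state fed to the dispatching agent.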

We select some traditional baselines, including K-Hop, which randomly selects an edge server K hops away, and Rand, which randomly selects a server from the global topology. A deep learning approach, GRL, which combines a graph convolutional network with DRL to make job dispatching decisions, is also used for comparison. We test our model under various conditions, i.e., changing the number of jobs, the number of edge cloud nodes, the weight parameters in the objective function and the configurations of the edge servers. Experiments on real-world data sets demonstrate the efficiency of the proposed model and its superiority over traditional and state-of-the-art baselines, reaching a maximum average performance improvement of 54.16% relative to K-Hop. Extensive evaluations demonstrate the generalization of our model under various conditions.

Ablation experiments are conducted to verify the effectiveness of each sub-module of our model: we compared variants with certain parts removed against DeepSCJD itself. Each sub-module improves the final decisions in all cases.

Authors

Zhaoyang Yu received the B.E. degree from College of Computer and Control Engineering in Nankai University (NKU), Tianjin, China, in 2018. She is currently pursuing the Ph.D. degree with the Department of Computer Science in NKU. Her research interests include mobile edge computing, task scheduling, resource management and deep learning.

 

 

Sinong Zhao received the B.E. degree from the Honors College of Northwestern Polytechnical University, Xi’an, China, in 2018. She is currently working towards her Ph.D. at Nankai University. Her research interests include machine learning, data mining, and anomaly detection.

 

 

Tongtong Su received the B.S. degree in software engineering from the Tianjin Normal University, Tianjin, China, in 2016, and the M.S. degree from the School of Computer and Information Engineering, Tianjin Normal University, Tianjin, China, in 2019. He is currently pursuing the Ph.D. degree with the College of Computer Science, Nankai University, Tianjin, China. His research interests include distributed deep learning, computer vision, and model compression.

Wenwen Liu received her B.Sc. and M.Sc. degrees from the School of Computer Science and Technology, Tiangong University, Tianjin, China, in 2013 and 2016, respectively. She is currently pursuing the Ph.D. degree with the Department of Computer Science in Nankai University. Her research interests include machine learning, intelligent edge computing and wireless networks.

 

Xiaoguang Liu (Member, IEEE) received his B.Sc., M.Sc., and Ph.D. degrees in computer science from Nankai University, Tianjin, China, in 1996, 1999, and 2002, respectively. He is currently a professor at the Department of Computer Science and Information Security, Nankai University. His research interests include search engines, storage systems, and GPU computing.

 

Gang Wang (Member, IEEE) received his B.Sc., M.Sc., and Ph.D. degrees in computer science from Nankai University, Tianjin, China, in 1996, 1999 and 2002, respectively. He is currently a professor at the Department of Computer Science and Information Security, Nankai University. His research interests include parallel computing, machine learning, storage systems, computer vision, and big data.

 

Dr. Zehua Wang received his Ph.D. degree from The University of British Columbia (UBC), Vancouver, in 2016 and was a Postdoctoral Research Fellow in the Wireless Networks and Mobile Systems (WiNMoS) Laboratory directed by Prof. Victor C.M. Leung from 2017 to 2018. He is now an Adjunct Professor in the Department of Electrical and Computer Engineering at UBC, a core faculty member of Blockchain@UBC, and a Mentor of entrepreneurship@UBC. He is interested in applying cryptography, zero-knowledge proofs, and game theory to protocol design and Web3 applications. His research focuses on improving the synergy and security of decentralized multi-agent systems. The major research projects he is currently working on include blockchain and smart contract security, applications of zero-knowledge proofs in blockchain, and decentralized privacy-preserving machine learning.

 

Victor C.M. Leung, PEng, PhD, FIEEE, FRSC, FEIC, FCAE, is Professor Emeritus in the Communications Group and Director of the Wireless Networks and Mobile Systems (WiNMoS) Laboratory, Dept. of Electrical and Computer Engineering. His research interests are in the area of telecommunications and computer communications networking. The scope of this research includes the design, evaluation, and analysis of network architectures, protocols, and network management, control, and internetworking strategies for reliable, efficient, and cost-effective communications.
