Workflow Scheduling on Hybrid Fog-Cloud Environment Based on a Novel Critical Path Extraction Algorithm
Fatemeh Davami 1, Sahar Adabi 2, Ali Rezaee 3, Amir Masoud Rahmani 4
1 - Department of Computer Engineering, Meymand Center, Islamic Azad University, Meymand, Iran
2 - Department of Computer Engineering, North Tehran Branch, Islamic Azad University, Tehran, Iran
3 - Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
4 - Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
Abstract- Over the last decade, Cloud data centers have emerged as the key computing architectures for enabling extreme-data workflows. Because of the complexity and diversity of computational resources such as Fog nodes and Cloud servers, workflow scheduling is a central challenge in Cloud and/or Fog computing environments. To address this issue, the present study proposes a scheduling algorithm based on critical path extraction, referred to as the Critical Path Extraction Algorithm (CPEA). It is a new multi-criteria decision-making algorithm for extracting the critical paths of multiple workflows, which matters because identifying the critical path is essential to creating and controlling a schedule. An extensive software simulation study compares the new algorithm on real workloads against a recent algorithm, GRP-HEFT. The experimental results confirm the proposed algorithm's superiority: workflow scheduling based on CPEA improves both make-span and waiting time.
Keywords: Cloud-Fog Computing, Critical Path, Distributed Algorithms, Multiple Workflows Scheduling.
I. INTRODUCTION
Fog computing extends Cloud computing by provisioning resources that process part of the request load locally on nodes [1], acting as a near-end computing proxy between front-end IoT devices and Cloud servers. Researchers consider Fog computing a crucial need because of several significant merits, such as local low-latency resource allocation [2] and mobility support [1, 3]. In addition, large-scale distributed applications are commonly submitted as workflows with specific Quality of Service (QoS) prerequisites. Workflow scheduling entails mapping tasks to distributed computational resources with regard to quality requirements, including bandwidth, cost, and time [4].
Scheduling algorithms [5] for scientific workflows are central to exploiting the merits of the Fog and the Cloud, and have therefore been widely examined recently. Because numerous tasks and workflows must be processed at every location, scheduling algorithms must be scalable and capable of making immediate decisions. Their main aim is to find the most suitable Fog and Cloud resources for end users' workflows [6].
Experts in the field have introduced numerous scheduling algorithms, the majority of which focus on the single-workflow scheduling problem [7-10], which is not suitable for current IoT-originated requests. Many approaches [11-15] address multiple-workflow scheduling, but most of them emphasize the Cloud environment [5, 16-18], and only a few target multiple-workflow scheduling in Fog environments [19, 20]. Moreover, IoT node movement has not been taken into account by several earlier investigations [14, 17, 21].
Given these problems, we deal with scheduling multiple workflows and thereby face two key challenges: analyzing multiple workflows submitted concurrently, and assigning them to the proper resources so as to avoid underload and overload.
It is widely accepted that the critical path is the longest path from an entry task to an exit task (or from the current task to an entry or exit task). When every task on this longest path of a task-dependent scientific workflow must meet a deadline, the path is known as the critical path. Tasks on the critical path take priority over those on non-critical paths [22, 23], which is crucial because the critical path contributes to the control and creation of the schedule. Identifying the critical tasks therefore reduces delays and guides where resources should be utilized.
Several earlier investigations identified critical tasks by computing a static rank value at the start of the scheduling process, such as CP-PGWO [23], a multi-objective workflow scheduling approach for Cloud computing that uses the critical path, and the Dynamic Critical Path (DCP) algorithm [24], in which critical tasks are chosen and allocated to suitable resources. Numerous algorithms instead apply a dynamic point of view [25], since it rapidly yields efficient critical-path solutions and overall project durations, and thoroughly accommodates possible changes in the critical paths.
We present a novel algorithm called CPEA. Its merit is the ability to simultaneously schedule multiple workflows and extract their critical paths, with the help of the Multi-Criteria Advance Reservation (MCAR) algorithm [22]. CPEA extracts the critical paths with regard to the Task Critical Value (TCV), which reveals the criticality of each task and is measured from the condition of the available resources versus the requested resources, together with the features of the DAG. Because Fog environments contain unpredictable, autonomous resources whose state can change over time and which may continually join or leave the environment without prior notice, CPEA computes the TCV of tasks dynamically.
The main contributions of this paper are as follows:
· Scheduling multiple workflows simultaneously in different workloads.
· Presenting a novel scheduling scheme, enabling Fog computing as an assistive platform for handling intensive loads on the Cloud.
· Providing a new critical path extraction method that helps characterize the received workflow Directed Acyclic Graph (DAG) more accurately and meaningfully.
The remainder of this paper is organized as follows: Section 2 reviews the literature. Section 3 describes the architecture, and section 4 presents CPEA. Section 5 deals with the evaluation and experimental outputs of the new CPEA and other scheduling algorithms. Finally, section 6 concludes and offers recommendations for further research.
II. REVIEW OF THE LITERATURE
In [26], a multi-objective workflow scheduling approach for the cloud environment, the Pareto-based Grey Wolf Optimizer based on Critical Path(s) (CP-PGWO), has been proposed. Before solution encoding, a topological sort of the workflow tasks generates an order vector that respects the parent-child relationships and assigns each task an index from 1 to n, where n is the number of workflow tasks. A CP-PGWO solution is a sequence encoded by task indexes that depicts the assignment of workflow tasks to VMs. Simulation shows that the proposed approach improves make-span by approximately 68% compared with PGWO (Pareto-based Grey Wolf Optimizer), achieves more accurate population sampling for about 70% of workflows, and avoids cost increases in more than 50% of workflows.
A Dynamic, Deadline, and Budget-aware Workflow Scheduling algorithm (DDBWS) is presented in [27], designed specifically for WaaS environments. DDBWS schedules workflows by solving a multi-resource packing problem, considering the CPU and memory demands of tasks simultaneously. The reported experiments show that DDBWS achieves a planning success rate of at least 96% on six different workloads while decreasing the total number of leased VMs.
The study in [23] presented a Multi-Agent System (MAS)-Cloud architecture that dynamically monitors and predicts the required memory/Central Processing Unit (CPU) capacity and provisions computational resources on Cloud platforms. The MAS-Cloud architecture utilizes a fault detection and recovery model to ensure reliable application runs.
Researchers then developed a new workflow management framework known as DoFCF (Deploy on the Federated Cloud Framework) [28] to enhance the security of workflow applications distributed over a federated Cloud. The framework dynamically partitions scientific workflows across the data centers of the federated (public/private) Cloud to reduce costs and satisfy security needs while resolving run-time failures, and it includes a cost model for optimizing the cost of each deployment option. The framework was validated in a Cloud simulation tool (CloudSim) and on a realistic workflow-based Cloud platform (e-Science Central).
In addition, [29] proposed a higher-level, service-oriented, Cloud-based workflow architecture. This conceptual architecture supports loose coupling and interoperability of its constituent entities in accordance with Service-Oriented Architecture (SOA). However, it fails to meet Cloud production objectives and faces a major problem of the Cloud environment: devising an effective service discovery mechanism for determining the best configuration among a distinct collection of distributed resources and capacities. Our architecture attempts to resolve this challenge in the Cloud-Fog environment.
Moreover, [30] presented a reference architecture for Cloud Workflow Management Systems (WMSs). The reference architecture includes important components such as a user interface enabling users to create, edit, submit, and monitor their applications. Extending current workflow systems with Cloud computing thus eliminates the need for dedicated physical resources and offers users access to on-demand, flexible, and scalable infrastructure.
Another WMS, known as Pegasus, has been reported in [31, 32] to manage workflow execution on distributed data and computing resources. Pegasus WMS maps an abstract workflow specification to an executable workflow description, connecting the scientific domain to the execution environment, and its optimizations improve the reliability and performance of workflow execution. Scalability is a crucial part of the system design because several Pegasus users have complex and large workflows.
Moreover, other Scientific Workflow Management Systems (SWFMSs) such as the Galaxy system [33], TruXy [34], the Cloudbus WMS architecture [30], Airavata [35], iPlant [36], Tavaxy [37], VIEW [38], SciCumulus [39], Taverna [40], Swift [41, 42], and Kepler [43] have also been presented.
The study in [8] examined the capacity of Fog computing for scheduling scientific workflows with low-latency requirements and presented a Multi-Objective Workflow Optimization (MOWO) algorithm based on the NSGA-II metaheuristic. Although this algorithm targets the Fog environment, its weaknesses are that it does not consider different kinds of workflows, lacks support for node mobility, and ignores workflow priority.
The authors of [21] presented a prediction-based multi-objective dynamic evolutionary algorithm, the neural network non-dominated sorting genetic algorithm-II (NN-DNSGA-II), which combines NSGA-II with the power of Artificial Neural Networks (ANNs). However, the algorithm was developed only around the strength of NNs and has several disadvantages: it neglects Internet of Things (IoT) node mobility and workflow priority, and it cannot schedule multiple workflows or operate in the Fog environment.
Moreover, [44] developed a job prioritization scheme that uses the Markovian chain stationary probability as a measure of job significance. The scheme keeps the precedence order for jobs with precedence constraints between each other and, for jobs without precedence constraints, allocates priorities based on job significance. The authors also designed a smooth spare-budget splitting approach that splits the spare budget evenly across jobs. The algorithm has two major components: (1) a weighted upward-rank priority scheme that uses the stationary distribution of a random walk on the DAG as the weights, and (2) uniform spare-budget splitting. To schedule multiple workflows, an extended DAG is built by adding pseudo entry and exit nodes that connect the individual DAGs.
Additionally, [20] presented a hybrid Fog- and Cloud-aware heuristic for dynamically scheduling several real-time IoT workflows with regard to the communication and computational costs of the tasks. In contrast to conventional strategies, in which the major processing of an IoT job is done in the Fog layer, this method schedules computationally demanding tasks with low communication needs in the Cloud, and communication-intensive tasks with low computational demands in the Fog. Its disadvantages are lack of mobility support, inability to schedule multiple workflows, and no consideration of different workload types, workflow priority, or critical paths.
Finally, we note other scheduling algorithms such as ReRaP [45], FR-MOS [6], CDMWS [16], and E-HEFT [46].
Table 1: A brief review of related works
Method/Algorithm | Year | Evaluation metrics | Weaknesses | Advantages |
DDBWS [27] | 2021 | · Total VM numbers · Execution cost · Planning success rate | − Low utilization − No attention to fault tolerance − Doesn’t consider critical paths − Doesn’t consider workflow parallelism | − Considering different types of workloads − Considering deadline and budget − Considering both the CPU and memory demands of tasks − Ability to schedule multiple workflows |
CP-PGWO [26] | 2021 | · Make-span · Cost · Resource utilization | − Doesn’t consider the Fog environment − Doesn’t consider different types of workloads − Considering static task scheduling − Inability to schedule multiple workflows | − Considering critical paths − Considering deadline − Considering task priority |
FR-MOS [6] | 2020 | · Make-span · Cost Ratio · Reliability | − No attention to fault tolerance in a multi-Cloud environment − Doesn’t consider different types of workloads − Doesn’t consider critical paths − Doesn’t consider the Fog environment − Inability to schedule multiple workflows − Doesn’t consider the deadline − Doesn’t consider workflow parallelism | − Considering fuzzy resource utilization − Considering the reliability constraints − Considering failure of cloud platforms − Considering the location and order of executed tasks |
MOWO [8] | 2020 | · Cost Ratio · Reliability · Response time | − Lack of mobility support − No attention to the priority of workflows − Inability to schedule multiple workflows − Doesn’t consider critical paths − Doesn’t consider workflow parallelism | − Considering workloads |
NN-DNSGA-II [21] | 2020 | · Make-span · Cost · Energy · Reliability | − Neglects mobility − Doesn’t consider the Fog environment − Doesn’t consider the priority of workflows − Inability to schedule multiple workflows − Doesn’t consider critical paths − Doesn’t consider the deadline − Doesn’t consider workflow parallelism | − Considering Task workload − Minimizing the energy consumption − Maximizing the reliability and utilization − Considering neural network |
Hybrid-EDF [20] | 2019 | · Deadline Miss Ratio · Tasks percentage executed on Cloud · Total monetary cost | − Lack of mobility support − Doesn’t consider different types of workloads − Inability to schedule multiple workflows − Doesn’t consider the priority of workflows − Doesn’t consider critical paths − Doesn’t consider workflow parallelism | − Dynamic scheduling − Considering the Fog environment − Considering the deadline |
MP [47] | 2016 | · Completion time · Load balancing | − Lack of mobility support − Doesn’t consider different types of workloads − Inability to schedule multiple workflows − Doesn’t consider critical paths − Doesn’t consider the deadline | − Considering intrinsic correlation of tasks and resources information − Utilizing the percentage of resource service times − Considering Synthetic workload |
BAVE, BAVE_M [44] | 2019 | · Make-span · Length ratio | − Lack of mobility support − Doesn’t consider different types of workloads − Doesn’t consider the Fog environment − Doesn’t consider the deadline − Doesn’t consider workflow parallelism | − Considering a broad range of workloads − Considering budget constraints − Considering weighted priority scheme − Ability to schedule multiple workflows − Considering critical path |
DSPF* | 2021 | · Waiting time · Workflow parallelism · Make-span | − Doesn’t consider failure handling − Not assigning the priority to workflows − Lack of mobility support | − Considering different types of workloads − Ability to schedule multiple workflows − Considering the Fog environment − Considering the deadline − Considering workflow parallelism |
*Proposed algorithm
III. THE SCHEDULING ARCHITECTURE
The Multiple workflows Scheduling with workflow Prioritization and Clustering (MSPC) architecture has been developed as a layered model in a Cloud-Fog environment, so its components cooperate in a hierarchical, distributed architecture. The layers of this architecture are the Fog Computing Layer (FCL), the Cloud Computing Layer (CCL), and the Data Generator Layer (DGL), described as follows (Fig. 1) [48]:
1. The CCL is on the top and includes the Cloud DCs, which consist of VMs, each with three priority queues.
2. The FCL, with numerous Fogs, is in the middle; each Fog contains Fog nodes with an executor that executes the workflows.
3. DGL is at the bottom, wherein the workflows are generated.
Fig. 1: The MSPC model with Fog-based approach [48]
The workflow scheduling process starts when workflows are produced and introduced into the system. The workflows are then clustered based on the geographical location of their producing resources. Next, the Fogs near each workflow's producer location in each cluster are arranged by their distance from the workflow producer, and the nearest Fog in the Fog layer is chosen for executing the workflow. A proper Fog node within the chosen Fog is then sought; if one is found, the workflow is sent to its executor. If no proper Fog node is found in that Fog, the workflow is submitted to the other Fogs of the same layer in distance order (from nearest to farthest).
If adequate resources are unavailable in the FCL, the requested workflow is submitted to the CCL for processing.
Each DC in the CCL is searched for a running VM. If a suitable VM is found and it is free, the workflow is executed immediately; if the suitable VM is busy, the workflow must enter that VM's queue.
Because this architecture emphasizes the concurrent scheduling of multiple workflows, in contrast to other architectures, the workflows must be prioritized. A workflow's priority is derived from the amount of resources it needs for execution. The workflow is then inserted into the related VM queue according to its priority, after which a workflow is selected from one of the VM queues and submitted to that VM's executor.
If neither a suitable Fog node in the Fog layer nor a proper free VM (or a VM with an empty queue) in the Cloud can be found, the workflow is eliminated from the scheduling process.
Fig. 2 presents the system activity diagram, showing the major flow of the architecture.
The diagram illustrates the system process from the generation of a workflow to its execution or removal from the cycle. First, a workflow is created and introduced. The Workflow Clusterer then clusters each imported workflow based on the geographical site of its creating resources and sorts the Fogs by their distance from the workflow producer. Next, a dispatcher in the Fog layer begins with the nearest Fog and seeks a Fog node for executing that workflow, checking each discovered node for service ability. If a proper node is identified, the workflow is submitted for execution; otherwise, the dispatcher continues searching the other Fogs in distance order.
If no suitable node exists in the Fogs of the Fog layer, the Cloud layer dispatcher searches each DC for a free VM or a VM with an empty queue. When a VM can execute the workflow but is busy, the workflow is inserted into that VM's queue according to its priority, which reflects the amount of resources it requires. This step consults an updated table containing the newest state of the VM queues; the workflow prioritization itself is the responsibility of the prioritizer component, which allocates a priority based on the disk, CPU, memory, workflow size, and workflow weight required by that workflow. The dispatcher then places the workflow in the queue of the VM chosen from the table, and the scheduler component selects a workflow from a VM queue and submits it to that VM's executor. When no data center remains to check, neither the Cloud layer nor the Fog layer has the resources to service the entered workflow, and the workflow is eliminated.
Fig. 2: The main flow of the scheduling system
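To make the dispatch flow of Figs. 1 and 2 concrete, the following minimal Python sketch traces a workflow through the layers. It is an illustrative reading of the architecture, not the actual MSPC implementation (which runs in C# inside the simulator); every name here (Fog, find_free_node, suitable_vms, the priority formula) is an assumption for exposition.

```python
def prioritize(workflow):
    """Hypothetical priority: the prioritizer weighs the CPU, memory, disk,
    workflow size, and workflow weight that the workflow requires."""
    return (workflow.cpu + workflow.memory + workflow.disk
            + workflow.size + workflow.weight)

def dispatch(workflow, fogs, cloud_dcs):
    """Nearest-Fog-first dispatch with Cloud fallback, as described above."""
    # Fog layer: try the Fogs of the workflow's geographical cluster, nearest first.
    for fog in sorted(fogs, key=lambda f: f.distance_to(workflow.producer)):
        node = fog.find_free_node(workflow)   # a Fog node able to serve the workflow
        if node is not None:
            return node.execute(workflow)
    # Cloud layer: search each data center for a suitable free VM.
    suitable = [vm for dc in cloud_dcs for vm in dc.suitable_vms(workflow)]
    for vm in suitable:
        if vm.is_free():
            return vm.execute(workflow)
    # Suitable VMs exist but all are busy: enqueue by priority (the architecture
    # keeps three priority queues per VM; a single priority queue stands in here).
    if suitable:
        target = min(suitable, key=lambda vm: vm.queue_length())
        target.enqueue(prioritize(workflow), workflow)
        return None
    # Neither layer can service the workflow: remove it from scheduling.
    workflow.drop()
    return None
```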
IV. THE PROPOSED CRITICAL PATH EXTRACTION ALGORITHM (CPEA)
As stated in several studies, the main principle of efficient workflow scheduling is the identification of critical tasks. Several studies [49, 50] refer to the longest path from an entry task to an exit task (or the longest path from the current task to an entry or exit task) as the CP; notably, the CP generally contains the critical tasks. In numerous studies, researchers considered only communication cost and task computation cost to determine the critical tasks, without accounting for the dynamic nature of Fog/Cloud resources [51, 52]. Put differently, these studies used a static ranking strategy to identify the critical tasks [52, 53], calculating the tasks' weight values once at the initiation of scheduling; the critical tasks are then employed to identify the CP.
As a dynamic ranking approach, CPEA makes decisions according to the most current state of the workflow and of the Fog/Cloud resources, instead of using a static ranking value computed at the onset of the scheduling procedure.
In addition, CPEA allocates a TCV as the rank of each task of a given workflow DAG. This value is updated during the scheduling process to reflect the newest Fog/Cloud resource state and the state of the workflow's scheduled tasks. We now present the parameters CPEA uses to measure the criticality of tasks in a workflow.
· Communication Cost ($Cm_{i,j}$): The amount of data transferred between two tasks $t_i$ and $t_j$, measured in bytes, kilobytes, and so forth. A task becomes more critical as the communication cost between the parent and child tasks grows; hence, such strongly dependent tasks should be scheduled on the same cluster(s) where high-speed communication links are available.
· Computation Cost Average ($\overline{w_i}$): The average computation cost of task $t_i$ over the available resources, where $w_{i,j}$ denotes the computation cost of task $t_i$ on resource $r_j$ and $N$ is the number of candidate resources. Equation 1 expresses this value:

$$\overline{w_i} = \frac{1}{N}\sum_{j=1}^{N} w_{i,j} \qquad (1)$$

[Equations (2)-(6), which define the remaining parameters combined into a task's TCV, could not be recovered from the extracted source; Equation (3) is the one that computes the TCV itself.]
Fig. 3 shows the flowchart of the CPEA algorithm.
Upon receiving a workflow DAG, CPEA takes the following steps for each unscheduled task in that workflow.
First, CPEA obtains every available path of the workflow with the help of the DFS algorithm. It then computes the TCV for all unscheduled tasks on each path according to Equation 3. Once this value has been calculated for those tasks, the algorithm finds the workflow CP by summing the TCVs of the tasks along each path; CPEA selects the path with the maximum TCV summation as the CP. Each DFS path is processed by the critical path calculation algorithm, as sketched below.
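The following Python sketch captures the two steps just described: DFS enumeration of all entry-to-exit paths, followed by selection of the path with the maximum TCV sum. The tcv callable stands in for Equation 3, which CPEA re-evaluates dynamically from the current resource state; the concrete values used here are placeholder assumptions.

```python
def all_paths(dag, entries):
    """Enumerate every entry-to-exit path of a workflow DAG by depth-first search.
    `dag` maps each task to its list of children; exits are tasks with no children."""
    paths = []

    def dfs(task, path):
        path.append(task)
        if not dag[task]:                # exit task reached: record this path
            paths.append(list(path))
        for child in dag[task]:
            dfs(child, path)
        path.pop()

    for entry in entries:
        dfs(entry, [])
    return paths

def extract_critical_path(dag, entries, tcv):
    """CPEA core idea: the CP is the path whose tasks have the largest TCV sum,
    with TCV recomputed from the current Fog/Cloud resource state."""
    return max(all_paths(dag, entries), key=lambda p: sum(tcv(t) for t in p))

# Toy usage with a placeholder TCV table (Equation 3 would supply real values):
dag = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"], "t4": []}
cp = extract_critical_path(dag, entries=["t1"],
                           tcv=lambda t: {"t1": 2, "t2": 5, "t3": 1, "t4": 3}[t])
# cp == ["t1", "t2", "t4"]  (TCV sum 10 beats 6 for the alternative path)
```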
Critical Path Average (CPA): One of the strengths of our system is the concurrent scheduling of multiple workflows. When scheduling numerous workflows at the same time, each workflow's CP is specified by CPEA, and Equation 7 calculates the CPA:

$$CPA = \frac{1}{n}\sum_{k=1}^{n} CP_k \qquad (7)$$

where $n$ stands for the number of workflows that must be concurrently scheduled and $CP_k$ is the CP value of the $k$-th workflow.
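As a worked instance of Equation 7, assuming each workflow's CP value is the TCV sum of its extracted critical path (per the step above):

```python
def critical_path_average(cp_values):
    """Equation 7: CPA is the mean CP value over the n concurrently scheduled workflows."""
    return sum(cp_values) / len(cp_values)

# e.g., three concurrent workflows whose extracted CPs have TCV sums 10, 14, and 6:
assert critical_path_average([10, 14, 6]) == 10.0
```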
Fig. 4 shows the flowchart of the Depth-First Search algorithm.
Fig. 3: CPEA flowchart
Fig. 4: Critical path calculation flowchart
As described in [22], the scheduler component applies an advance reservation algorithm known as Multi-Criteria Advance Reservation (MCAR) to select the most suitable resource satisfying the QoS constraints. This algorithm schedules a given task on the cluster resources while concurrently meeting the QoS expectations and reliability.
The QoS features that attract application owners in a Cloud environment include execution time, reliability, failure tolerance, and availability. Hence, rather than using the earliest-execution-time approach, the MCAR algorithm was proposed as a QoS-based heuristic that avoids biased scheduling. It uses a parameter known as the Quality of Resource (QoR) to quantify the suitability of each resource.
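The exact QoR formula belongs to MCAR and is defined in [22]; the sketch below only illustrates the idea of a multi-criteria suitability score as a weighted combination of the QoS features named above. The attribute names and weights are chosen for exposition (attributes assumed normalized to [0, 1], with execution speed oriented so that higher is better).

```python
def quality_of_resource(res, weights=(0.4, 0.3, 0.2, 0.1)):
    """Hypothetical QoR: weighted sum of normalized execution speed,
    reliability, failure tolerance, and availability (not the formula of [22])."""
    w_speed, w_rel, w_ft, w_avail = weights
    return (w_speed * res["speed"] + w_rel * res["reliability"]
            + w_ft * res["fault_tolerance"] + w_avail * res["availability"])

def select_resource(candidates):
    """MCAR-style selection: pick the candidate resource with the highest QoR."""
    return max(candidates, key=quality_of_resource)

# e.g., a fast but flaky node versus a slower, highly reliable one:
r1 = {"speed": 0.9, "reliability": 0.5, "fault_tolerance": 0.4, "availability": 0.8}
r2 = {"speed": 0.6, "reliability": 0.9, "fault_tolerance": 0.9, "availability": 0.9}
best = select_resource([r1, r2])   # r2 wins: 0.78 versus 0.67
```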
V. EVALUATIONS AND DISCUSSIONS
We first address the configuration of the simulation experiments. We used the open-source Velociraptor simulator [54], which is implemented in C#. Python is used to implement the LSTM-based prediction component on top of Google TensorFlow, and we expose this component as a local Flask web service [55] to be consumed by the proposed scheduling code in the simulator. Two open-source datasets were used in the evaluation process: the DS Lab (Distributed Systems Laboratory) abstract job workload [55] and the DS Lab detailed job specifications [56]. The source codes are available.
Moreover, we evaluate the new algorithm, CPEA, against GRP-HEFT [57]. Each simulation lasted 300 time-steps, the simulation was repeated 20 times for each method, and the average of each experiment is reported.
To make an accurate comparison between CPEA and GRP-HEFT, the following modifications were made:
− GRP-HEFT treats the budget as a constraint; this constraint was relaxed because this study does not address cost-related optimizations.
− Support for scheduling on Fog resources was added to GRP-HEFT.
− The greedy scheme is employed in GRP-HEFT.
− The First Fit scheme is applied in the proposed CPEA.
· Configuring the environment
Data released by Facebook in the OpenCompute project [58] was used as the source for configuring the Cloud environment. The OpenCompute details were modified in two respects: (1) the hosts were grouped into clusters of 10, and (2) a "Cloud provider count" parameter was added to mimic a multi-provider Cloud environment.
Table 2 illustrates the configuration of the Cloud environments. Finally, data reported by Cisco [59] was used for processor capacities in Trillion Instructions Per Second (TIPS) and Million Instructions Per Second (MIPS).
Table 2: Cloud environment configuration for simulation experiments
Physical hosts | Network |
Processor type | Capacity (TIPS) | RAM (GB) | Inter-DS Bandwidth | Datacenter-to-WAN Bandwidth | Communication Locality |
· Xeon 5500 · Xeon 6500 · AMD Ryzen 9 · AMD Magny-Cours · AMD Threadripper | 749-2,356 | 144-192 | Unlimited | 1 Tbps | 80%-90% |
Environment Structure |
No. of Cloud providers | No. of Cloud data centers | Total no. of clusters | Total no. of Cloud hosts |
3 | 21 | 289 | 9,350 |
A. Experimental Outputs
Here, we compare GRP-HEFT [57] with the proposed CPEA.
· Scheduling Performance: Cloud and/or Fog Resource Utilization (RU)
Resource utilization refers to the fraction of machine time spent executing the tasks scheduled on that machine, computed by Equation 8:

$$RU = \frac{T_{\text{busy}}}{T_{\text{total}}} \times 100\% \qquad (8)$$

where $T_{\text{busy}}$ is the time the machine spends executing its scheduled tasks and $T_{\text{total}}$ is the machine's total available time.
Table 3 shows resource utilization for both approaches in various workloads.
Table 3: Resource utilization
Method | Utilization of Cloud | |
DSL 35 | DSL 75 | |
CPEA | ~ 50% | ~ 55% |
GRP-HEFT | ~ 46% | ~ 49% |
· Scheduling Performance: Scheduling Success Rate (SSR)
SSR represents the fraction of jobs with the successful schedule and completion. Again, Equation 9 defines SSR, which is studied in various workloads for both approaches.
(9) |
|
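A small sketch of how Equations 8 and 9 would be evaluated over a simulation trace; the record fields (busy_time, total_time, completed) are assumed names for exposition, not the simulator's actual API.

```python
def resource_utilization(machines):
    """Equation 8: share of machine time spent executing scheduled tasks,
    as a percentage (e.g., ~50% for CPEA on DSL-35 in Table 3)."""
    busy = sum(m["busy_time"] for m in machines)
    total = sum(m["total_time"] for m in machines)
    return 100.0 * busy / total

def scheduling_success_rate(jobs):
    """Equation 9: fraction of jobs successfully scheduled and completed."""
    return sum(1 for j in jobs if j["completed"]) / len(jobs)
```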
· Scheduling Performance: Time-related parameters
We consider two time-related parameters, make-span and waiting time, to assess scheduling performance.
§ Waiting Time parameter
This parameter represents the average time a job must wait for its scheduling procedure to finish. Fig. 5 depicts the waiting time of jobs for the two workloads and two methods. As Fig. 5 shows, when CPEA is used on workload DSL-35, ~70% of the workflows wait 0-3.6 seconds for a complete schedule.
Fig. 5: Waiting time results in workloads DSL-35 and DSL-75 for (a) the proposed CPEA and (b) GRP-HEFT
§ Make-span parameter
By definition, make-span is the amount of time from a workflow's submission to its completion. Fig. 6 compares the make-span parameter of the proposed CPEA and GRP-HEFT in two distinct workloads. For example, with CPEA, 25% of the workflows in workload DSL-35 need 66-92 seconds to complete.
Fig. 6: Make-span results in workloads DSL-35 and DSL-75 for (a) the proposed CPEA and (b) GRP-HEFT
VI. CONCLUSIONS
Although Cloud/Fog computing offers several merits [60, 61], workflow scheduling, and specifically multiple-workflow scheduling, remains the main challenge in these computing environments. As discussed above, the critical path is of high prominence in scheduling; it is therefore advantageous to find the critical paths in a workflow and schedule tasks along them. We proposed a multi-criteria decision-making algorithm for extracting the critical paths of multiple workflows.
Moreover, we conducted intensive software simulation studies comparing our algorithm with the existing GRP-HEFT scheduling algorithm on two workloads, with emphasis on make-span and waiting time. The experimental outputs confirmed that our method outperforms GRP-HEFT with regard to both waiting time and make-span.
In future work, we will focus on partitioning bigger DAGs to improve overall scheduling performance, considering the inherent mobility of IoT and Fog nodes, and assigning priority values to workflows for group scheduling. We will also compare our algorithm with other benchmark algorithms.
The proposed algorithm has limitations, such as the minor computing overhead imposed by the parallelism prediction service and the lack of failure handling and mobility support, which we will try to overcome in future work.
REFERENCES
1. Tychalas, D. and H. Karatza, A Scheduling Algorithm for a Fog Computing System with Bag-of-Tasks Jobs: Simulation and Performance Evaluation. Simulation Modelling Practice and Theory, 2020. 98: p. 101982.
2. Naha, R.K., et al., Fog computing: Survey of trends, architectures, requirements, and research directions. IEEE access, 2018. 6: p. 47980-48009.
3. Kunal, S., A. Saha, and R. Amin, An overview of cloud‐fog computing: Architectures, applications with security challenges. Security and Privacy, 2019. 2(4): p. e72.
4. Donyadari, E., F. Safi-Esfahani, and N. Nourafza, Scientific workflow scheduling based on deadline constraints in cloud environment. Int J Mechatron Electr Comput Technol (IJMEC), 2015. 5(16): p. 1-15.
5. Rodriguez, M.A. and R. Buyya, Scheduling dynamic workloads in multi-tenant scientific workflow as a service platforms. Future Generation Computer Systems, 2018. 79: p. 739-750.
6. Farid, M., et al., Scheduling Scientific Workflow using Multi-objective Algorithm with Fuzzy Resource Utilization in Multi-cloud Environment. IEEE Access, 2020. 8: p. 24309-24322.
7. Abrishami, S., M. Naghibzadeh, and D.H. Epema, Deadline-constrained workflow scheduling algorithms for infrastructure as a service clouds. Future Generation Computer Systems, 2013. 29(1): p. 158-169.
8. De Maio, V. and D. Kimovski, Multi-objective scheduling of extreme data scientific workflows in Fog. Future Generation Computer Systems, 2020. 106: p. 171-184.
9. Zouaoui, S., L. Boussaid, and A. Mtibaa, Priority based round robin (PBRR) CPU scheduling algorithm. International Journal of Electrical & Computer Engineering (2088-8708), 2019. 9(1).
10. Arabnejad, V., K. Bubendorfer, and B. Ng, Budget and deadline aware e-science workflow scheduling in clouds. IEEE Transactions on Parallel and Distributed Systems, 2018. 30(1): p. 29-44.
11. Bittencourt, L.F. and E.R. Madeira, Towards the scheduling of multiple workflows on computational grids. Journal of grid computing, 2010. 8(3): p. 419-441.
12. Li, R., et al. On Scheduling of High-Throughput Scientific Workflows under Budget Constraints in Multi-Cloud Environments. in 2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom). 2018. IEEE.
13. Stavrinides, G.L. and H.D. Karatza, Scheduling multiple task graphs in heterogeneous distributed real-time systems by exploiting schedule holes with bin packing techniques. Simulation Modelling Practice and Theory, 2011. 19(1): p. 540-552.
14. Wang, Y., et al. Fairness scheduling with dynamic priority for multi workflow on heterogeneous systems. in 2017 IEEE 2nd International Conference on Cloud Computing and Big Data Analysis (ICCCBDA). 2017. IEEE.
15. Arabnejad, H. and J. Barbosa. Fairness resource sharing for dynamic workflow scheduling on heterogeneous systems. in 2012 IEEE 10th International Symposium on Parallel and Distributed Processing with Applications. 2012. IEEE.
16. Adhikari, M. and S. Koley, Cloud computing: a multi-workflow scheduling algorithm with dynamic reusability. Arabian Journal for Science and Engineering, 2018. 43(2): p. 645-660.
17. Arabnejad, H. and J.G. Barbosa, Maximizing the completion rate of concurrent scientific applications under time and budget constraints. Journal of computational science, 2017. 23: p. 120-129.
18. Jiang, H.-J., et al. Scheduling concurrent workflows in HPC cloud through exploiting schedule gaps. in International Conference on Algorithms and Architectures for Parallel Processing. 2011. Springer.
19. Ding, R., et al. A cost-effective time-constrained multi-workflow scheduling strategy in fog computing. in International Conference on Service-Oriented Computing. 2018. Springer.
20. Stavrinides, G.L. and H.D. Karatza, A hybrid approach to scheduling real-time IoT workflows in fog and cloud environments. Multimedia Tools and Applications, 2019. 78(17): p. 24639-24655.
21. Ismayilov, G. and H.R. Topcuoglu, Neural network based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing. Future Generation Computer Systems, 2020. 102: p. 307-322.
22. Adabi, S., A. Movaghar, and A.M. Rahmani, Bi-level fuzzy based advanced reservation of Cloud workflow applications on distributed Grid resources. The Journal of Supercomputing, 2014. 67(1): p. 175-218.
23. Doostali, S., S.M. Babamir, and M. Eini, CP-PGWO: multi-objective workflow scheduling for cloud computing using critical path. Cluster Computing, 2021.
24. Yu-Kwong, K. and I. Ahmad, Dynamic critical-path scheduling: an effective technique for allocating task graphs to multiprocessors. IEEE Transactions on Parallel and Distributed Systems, 1996. 7(5): p. 506-521.
25. Liu, D. and C. Hu, A dynamic critical path method for project scheduling based on a generalised fuzzy similarity. Journal of the Operational Research Society, 2021. 72(2): p. 458-470.
26. Doostali, S., S.M. Babamir, and M. Eini, CP-PGWO: multi-objective workflow scheduling for cloud computing using critical path. Cluster Computing, 2021: p. 1-21.
27. Saeedizade, E. and M. Ashtiani, DDBWS: a dynamic deadline and budget-aware workflow scheduling algorithm in workflow-as-a-service environments. The Journal of Supercomputing, 2021: p. 1-40.
28. Wen, Z., et al., Dynamically Partitioning Workflow over Federated Clouds for Optimising the Monetary Cost and Handling Run-Time Failures. IEEE Transactions on Cloud Computing, 2020. 8(4): p. 1093-1107.
29. Kayabay, K., et al. [WiP] A Workflow and Cloud Based Service-Oriented Architecture for Distributed Manufacturing in Industry 4.0 Context. in 2018 IEEE 11th Conference on Service-Oriented Computing and Applications (SOCA). 2018. IEEE.
30. Rodriguez, M.A. and R. Buyya, Scientific workflow management system for clouds, in Software Architecture for Big Data and the Cloud. 2017, Elsevier. p. 367-387.
31. Atkinson, M., et al., Scientific workflows: Past, present and future. 2017, Elsevier.
32. Deelman, E., et al., Pegasus, a workflow management system for science automation. Future Generation Computer Systems, 2015. 46: p. 17-35.
33. Goecks, J., A. Nekrutenko, and J. Taylor, Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences. Genome biology, 2010. 11(8): p. 1-13.
34. Nepal, S., et al., TruXy: Trusted storage cloud for scientific workflows. IEEE transactions on cloud computing, 2015. 5(3): p. 428-442.
35. Pierce, M.E., et al., Apache Airavata: design and directions of a science gateway framework. Concurrency and Computation: Practice and Experience, 2015. 27(16): p. 4282-4291.
36. Merchant, N., et al., The iPlant collaborative: cyberinfrastructure for enabling data to discovery for the life sciences. PLoS biology, 2016. 14(1): p. e1002342.
37. Abouelhoda, M., S.A. Issa, and M. Ghanem, Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support. BMC bioinformatics, 2012. 13(1): p. 77.
38. Lin, C., et al. Service-oriented architecture for VIEW: a visual scientific workflow management system. in 2008 IEEE International Conference on Services Computing. 2008. IEEE.
39. de Oliveira, D., et al. Scicumulus: A lightweight cloud middleware to explore many task computing paradigm in scientific workflows. in 2010 IEEE 3rd International Conference on Cloud Computing. 2010. IEEE.
40. Paterson, T. and A. Law, An XML transfer schema for exchange of genomic and genetic mapping data: implementation as a web service in a Taverna workflow. BMC Bioinformatics, 2009. 10(1): p. 252.
41. Zhao, Y., et al. Swift: Fast, reliable, loosely coupled parallel computation. in 2007 IEEE Congress on Services (Services 2007). 2007. IEEE.
42. Zhao, Y., et al., Enabling scalable scientific workflow management in the Cloud. Future Generation Computer Systems, 2015. 46: p. 3-16.
43. Altintas, I., O. Barney, and E. Jaeger-Frank. Provenance collection support in the kepler scientific workflow system. in International Provenance and Annotation Workshop. 2006. Springer.
44. Zhang, H., et al., Workflow scheduling in the cloud with weighted upward-rank priority scheme using random walk and uniform spare budget splitting. IEEE Access, 2019. 7: p. 60359-60375.
45. Naha, R.K., et al., Deadline-based dynamic resource allocation and provisioning algorithms in fog-cloud environment. Future Generation Computer Systems, 2020. 104: p. 131-141.
46. Samadi, Y., M. Zbakh, and C. Tadonki. E-HEFT: enhancement heterogeneous earliest finish time algorithm for task scheduling based on load balancing in cloud computing. in 2018 International Conference on High Performance Computing & Simulation (HPCS). 2018. IEEE.
47. Li, X., J. Song, and B. Huang, A scientific workflow management system architecture and its scheduling based on cloud service platform for manufacturing big data analytics. The International Journal of Advanced Manufacturing Technology, 2016. 84(1-4): p. 119-131.
48. Davami, F., et al., Fog-based architecture for scheduling multiple workflows with high availability requirement. Computing, 2021: p. 1-40.
49. Ma, T. and R. Buyya. Critical-path and priority based algorithms for scheduling workflows with parameter sweep tasks on global grids. in 17th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD'05). 2005. IEEE.
50. Rahman, M., S. Venugopal, and R. Buyya. A dynamic critical path algorithm for scheduling scientific workflow applications on global grids. in Third IEEE International Conference on e-Science and Grid Computing (e-Science 2007). 2007. IEEE.
51. Kwok, Y.-K. and I. Ahmad, Dynamic critical-path scheduling: An effective technique for allocating task graphs to multiprocessors. IEEE transactions on parallel and distributed systems, 1996. 7(5): p. 506-521.
52. Topcuoglu, H., S. Hariri, and M.-Y. Wu, Performance-effective and low-complexity task scheduling for heterogeneous computing. IEEE transactions on parallel and distributed systems, 2002. 13(3): p. 260-274.
53. Dong, F.-P. and S.G. Akl, Distributed double-level workflow scheduling algorithms for grid computing. Journal of Information Technology and Applications (資訊科技與應用期刊), 2007. 1(4): p. 261-273.
54. Velociraptor simulator source code. https://github.com/simulatie-oplossingen/Velociraptor.
55. Rezaee, A., Job workflow (in the form of DAG) specification information for scheduling or prediction. Zenodo, 2021.
56. Rezaee, A. and S. Adabi, Jobs (DAG workflow) and tasks dataset with near 50k job instances and 1.3 millions of tasks. Zenodo, 2020.
57. Faragardi, H.R., et al., GRP-HEFT: A budget-constrained resource provisioning scheme for workflow scheduling in IaaS clouds. IEEE Transactions on Parallel and Distributed Systems, 2019. 31(6): p. 1239-1254.
58. The OpenCompute Project. https://www.opencompute.org.
60. Ahmed, A., et al., Fog computing applications: Taxonomy and requirements. arXiv preprint arXiv:1907.11621, 2019.
61. Atlam, H., R. Walters, and G. Wills, Fog computing and the internet of things: a review. Big Data and Cognitive Computing, 2018. 2(2): p. 10.