Search Results - (( developing learner optimization algorithm ) OR ( java implementation max algorithm ))
Search alternatives:
- learner optimization
- java implementation
- developing learner
- implementation max
- max algorithm
1
OPTIMIZED MIN-MIN TASK SCHEDULING ALGORITHM FOR SCIENTIFIC WORKFLOWS IN A CLOUD ENVIRONMENT
Published 2023. “…To achieve this, we propose a novel mechanism called the Optimized Min-Min (OMin-Min) algorithm, inspired by the Min-Min algorithm. The objectives of this work are: i) to provide a comprehensive review of the cloud and scheduling process; ii) to classify the scheduling strategies and scientific workflows; iii) to implement our proposed algorithm alongside various scheduling algorithms (i.e., Min-Min, Round-Robin, Max-Min, and Modified Max-Min) for performance comparison, with different cloudlet sizes (i.e., small, medium, large, and heavy) in three scientific workflows (i.e., Montage, Epigenomics, and SIPHT); and iv) to investigate the performance of the implemented algorithms by using CloudSim. …”
Review -
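The entry above compares OMin-Min against the classic Min-Min heuristic it is derived from. A minimal Java sketch of plain Min-Min (not the paper's OMin-Min variant) is shown below; the class and method names, and the `etc` expected-time-to-compute matrix convention, are illustrative assumptions.

```java
// Illustrative sketch of the classic Min-Min task-scheduling heuristic.
// etc[t][m] is the expected execution time of task t on machine m.
public class MinMinScheduler {

    // Returns assignment[t] = index of the machine chosen for task t.
    public static int[] schedule(double[][] etc) {
        int tasks = etc.length, machines = etc[0].length;
        double[] ready = new double[machines];   // current ready time per machine
        boolean[] done = new boolean[tasks];
        int[] assignment = new int[tasks];

        for (int round = 0; round < tasks; round++) {
            int bestTask = -1, bestMachine = -1;
            double bestCompletion = Double.POSITIVE_INFINITY;
            // Among unscheduled tasks, pick the one whose minimum completion
            // time over all machines is smallest. (Max-Min, also compared in
            // the paper, instead picks the task whose minimum completion time
            // is largest, scheduling long tasks first.)
            for (int t = 0; t < tasks; t++) {
                if (done[t]) continue;
                for (int m = 0; m < machines; m++) {
                    double completion = ready[m] + etc[t][m];
                    if (completion < bestCompletion) {
                        bestCompletion = completion;
                        bestTask = t;
                        bestMachine = m;
                    }
                }
            }
            assignment[bestTask] = bestMachine;
            ready[bestMachine] = bestCompletion;  // machine busy until then
            done[bestTask] = true;
        }
        return assignment;
    }
}
```

In a CloudSim experiment such as the one in entry 14, this selection loop would typically live in a custom broker that binds cloudlets to VMs.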
2
Batch mode heuristic approaches for efficient task scheduling in grid computing system
Published 2016. “…Many algorithms have been implemented to solve the grid scheduling problem. …”
Get full text
Thesis -
3
A conceptual multi-agent framework using ant colony optimization and fuzzy algorithms for learning style detection
Published 2023. “…The multi-agent system applies ant colony optimization and fuzzy logic search algorithms as tools for detecting learning styles. …”
Conference Paper -
4
Meta-Heuristic Algorithms for Learning Path Recommender at MOOC
Published 2021. “…We have developed metaheuristic algorithms, including the Genetic Algorithm (GA) and the Ant Colony Optimization (ACO) algorithm, to solve the proposed model. …”
Get full text
Article -
5
An ensemble deep learning classifier stacked with fuzzy ARTMAP for malware detection
Published 2023. “…The stacked ensemble method uses several heterogeneous deep neural networks as the base learners. During the training and optimization process, these base learners adopt a hybrid BP and Particle Swarm Optimization algorithm to combine both local and global optimization capabilities for identifying optimal features and improving the classification performance. …”
Get full text
Article -
6
New model combination meta-learner to improve accuracy prediction P2P lending with stacking ensemble learning
Published 2023. “…The SMOTE method is used to balance the data, and LightGBM feature selection with stacking ensemble learning (LGBFS-StackingXGBoost) to optimize machine-learning accuracy. A new stacking ensemble model combines three base-learner algorithms, namely KNN, SVM and Random Forest, under the XGBoost meta-learner algorithm. …”
Get full text
Article -
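Entries 5 and 6 both rest on the stacking idea: base learners' predictions become the input features of a meta-learner. A conceptual Java sketch follows; the simple logistic meta-learner and all names here are assumptions for illustration only, whereas the paper's actual stack trains an XGBoost meta-learner over KNN, SVM and Random Forest base learners.

```java
// Conceptual sketch of stacking ensemble learning: level-0 base learners feed
// their predictions, as features, into a level-1 meta-learner.
public class StackingSketch {

    // A trained base learner returning a score in [0, 1]. Illustrative only:
    // in the cited paper these would be KNN, SVM and Random Forest models.
    public interface Learner {
        double predict(double[] x);
    }

    private final Learner[] baseLearners;
    private final double[] metaWeights;  // one weight per base learner
    private final double metaBias;

    public StackingSketch(Learner[] baseLearners, double[] metaWeights, double metaBias) {
        this.baseLearners = baseLearners;
        this.metaWeights = metaWeights;
        this.metaBias = metaBias;
    }

    // Level-1 feature vector: each base learner's prediction for x.
    public double[] baseFeatures(double[] x) {
        double[] z = new double[baseLearners.length];
        for (int i = 0; i < z.length; i++) z[i] = baseLearners[i].predict(x);
        return z;
    }

    // Meta-learner: here a fixed logistic combination of the base predictions,
    // standing in for the paper's trained XGBoost model.
    public double predict(double[] x) {
        double[] z = baseFeatures(x);
        double s = metaBias;
        for (int i = 0; i < z.length; i++) s += metaWeights[i] * z[i];
        return 1.0 / (1.0 + Math.exp(-s));
    }
}
```

In a real pipeline the meta-learner's weights would be fit on out-of-fold base predictions to avoid leaking the base learners' training data into level 1.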
14
SG-PBFS : Shortest Gap-Priority Based Fair Scheduling technique for job scheduling in cloud environment
Published 2024. “…To conduct this experiment, we employed the CloudSim simulator, which is implemented using the Java programming language. …”
Get full text
Article
