Search Results - (( develop learner optimization algorithm ) OR ( java optimization modified algorithm ))

  • Showing 1 - 14 results of 14
  1.

    A conceptual multi-agent framework using ant colony optimization and fuzzy algorithms for learning style detection by Basheer G.S., Ahmad M.S., Tang A.Y.C.

    Published 2023
    “…The multi-agent system applies ant colony optimization and fuzzy logic search algorithms as tools to detect learning styles. …”
    Conference Paper
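Record 1 applies ant colony optimization to learning-style detection. As background, here is a minimal ACO loop on a toy TSP instance; it is a sketch of the generic metaheuristic only, not the paper's multi-agent system, and every name and parameter below is illustrative:

```python
import random

def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal ant colony optimization for a small symmetric TSP."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # pheromone^alpha * (1/distance)^beta drives the edge choice
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):               # evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:       # deposit, stronger for shorter tours
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len
```

On a 4-city instance the swarm reliably converges to the shortest cycle; real uses add local search and parameter tuning on top of this skeleton.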
  2.

    Meta-Heuristic Algorithms for Learning Path Recommender at MOOC by Son, N.T., Jaafar, J., Aziz, I.A., Anh, B.N.

    Published 2021
    “…We have developed metaheuristic algorithms, including the Genetic Algorithm (GA) and the Ant Colony Optimization (ACO) algorithm, to solve the proposed model. …”
    Get full text
    Article
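Record 2 pairs a GA with ACO for learning-path recommendation. A bare-bones genetic algorithm on the classic OneMax benchmark shows the GA loop (tournament selection, one-point crossover, bit-flip mutation); it is a generic sketch, not the paper's recommender model:

```python
import random

def ga_onemax(n_bits=20, pop_size=30, n_gens=60, p_mut=0.05, seed=1):
    """Bare-bones genetic algorithm maximizing the number of 1-bits."""
    rng = random.Random(seed)
    fitness = sum                        # OneMax: count the 1s
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(n_gens):
        def tournament():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = [bit ^ 1 if rng.random() < p_mut else bit
                     for bit in p1[:cut] + p2[cut:]]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
    return best
```

A path recommender would swap the bit string for a course-sequence encoding and OneMax for the model's objective; the loop itself is unchanged.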
  3.

    OPTIMIZED MIN-MIN TASK SCHEDULING ALGORITHM FOR SCIENTIFIC WORKFLOWS IN A CLOUD ENVIRONMENT by Murad S.S., Badeel R., Alsandi N.S.A., Alshaaya R.F., Ahmed R.A., Muhammed A., Derahman M.

    Published 2023
    “…To achieve this, we propose a novel mechanism called the Optimized Min-Min (OMin-Min) algorithm, inspired by the Min-Min algorithm. …”
    Review
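Record 3 builds on the Min-Min heuristic. The baseline Min-Min scheduler it starts from can be sketched as follows (the paper's OMin-Min variant is not reproduced here; the task lengths and machine count are illustrative):

```python
def min_min(task_lengths, n_machines):
    """Classic Min-Min: repeatedly schedule the task whose earliest possible
    completion time is smallest, on the machine that achieves it."""
    ready = [0] * n_machines           # when each machine becomes free
    remaining = dict(enumerate(task_lengths))
    assignment = {}
    while remaining:
        finish, task, machine = min(
            (ready[m] + length, t, m)
            for t, length in remaining.items()
            for m in range(n_machines)
        )
        assignment[task] = machine
        ready[machine] = finish
        del remaining[task]
    return assignment, max(ready)      # task -> machine map, and the makespan
```

Min-Min's known weakness, which optimized variants target, is that favoring short tasks first can leave long tasks to dominate the makespan.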
  4.

    An ensemble deep learning classifier stacked with fuzzy ARTMAP for malware detection by Tan, Shing Chiang, Al-Andoli, Mohammed Nasser, Lim, Kok Swee, Goh, Pey Yun, Lim, Chee Peng

    Published 2023
    “…The stacked ensemble method uses several heterogeneous deep neural networks as the base learners. During the training and optimization process, these base learners adopt a hybrid BP and Particle Swarm Optimization algorithm to combine both local and global optimization capabilities for identifying optimal features and improving the classification performance. …”
    Get full text
    Article
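Record 4 trains its base learners with a hybrid BP + PSO scheme. A minimal standalone PSO loop on a continuous objective shows the global-search half of that hybrid (a generic sketch under assumed bounds and coefficients; the paper's coupling with backpropagation is not reproduced):

```python
import random

def pso_minimize(f, dim, n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5,
                 lo=-5.0, hi=5.0, seed=0):
    """Minimal particle swarm optimization over a continuous objective."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In a BP+PSO hybrid, `f` would be the network's training loss over its weight vector, with BP refining the positions PSO proposes.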
  5.

    New model combination meta-learner to improve accuracy prediction P2P lending with stacking ensemble learning by Muslim, Much Aziz, Nikmah, Tiara Lailatul, Agustina Pertiwi, Dwika Ananda, Subhan, Subhan, Jumanto, Jumanto, Dasril, Yosza, Iswanto, Iswanto

    Published 2023
    “…The SMOTE method is used to balance the data, and LightGBM feature selection with stacking ensemble learning (LGBFS-StackingXGBoost) is used to optimize machine learning accuracy. A new stacking ensemble model combines three base-learner algorithms, namely KNN, SVM, and Random Forest, under an XGBoost meta-learner. …”
    Get full text
    Article
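Record 5 stacks KNN, SVM, and Random Forest under an XGBoost meta-learner. The stacking mechanics it relies on — out-of-fold base predictions feeding a meta-learner — can be sketched with deliberately tiny stand-in learners (a 1-NN, a class-midpoint rule, and a "keep the best base column" meta-rule, all invented here); none of this is the paper's actual LGBFS-StackingXGBoost pipeline:

```python
def knn1_fit(X, y):
    """1-nearest-neighbour stand-in base learner."""
    def predict(x):
        j = min(range(len(X)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(X[i], x)))
        return y[j]
    return predict

def midpoint_fit(X, y):
    """Classifies by which class mean the first feature is closer to."""
    m0 = sum(x[0] for x, t in zip(X, y) if t == 0) / max(1, y.count(0))
    m1 = sum(x[0] for x, t in zip(X, y) if t == 1) / max(1, y.count(1))
    return lambda x: int(abs(x[0] - m1) < abs(x[0] - m0))

def best_base_meta_fit(oof, y):
    """Degenerate meta-learner: keep the base column with best OOF accuracy."""
    accs = [sum(oof[i][b] == y[i] for i in range(len(y)))
            for b in range(len(oof[0]))]
    best = max(range(len(accs)), key=lambda b: accs[b])
    return lambda preds: preds[best]

def stack_fit_predict(base_fits, meta_fit, X_train, y_train, X_test, k=3):
    """Stacking: out-of-fold base predictions train the meta-learner."""
    n = len(X_train)
    folds = [list(range(i, n, k)) for i in range(k)]
    oof = [[None] * len(base_fits) for _ in range(n)]
    for fold in folds:
        train_idx = [i for i in range(n) if i not in fold]
        Xtr = [X_train[i] for i in train_idx]
        ytr = [y_train[i] for i in train_idx]
        for b, fit in enumerate(base_fits):
            predict = fit(Xtr, ytr)
            for i in fold:
                oof[i][b] = predict(X_train[i])   # held-out base prediction
    meta = meta_fit(oof, y_train)
    # refit every base learner on all training data for test-time use
    full = [fit(X_train, y_train) for fit in base_fits]
    return [meta([p(x) for p in full]) for x in X_test]
```

The out-of-fold step is the crux: training the meta-learner on in-sample base predictions would let it learn the base learners' memorization rather than their generalization.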
  8.

    An ensemble learning method for spam email detection system based on metaheuristic algorithms by Behjat, Amir Rajabi

    Published 2015
    “…In order to address the challenges mentioned above, in the first phase a novel architecture based on ensemble feature selection techniques, including the Modified Binary Bat Algorithm (NBBA), the Binary Quantum Particle Swarm Optimization (QBPSO) algorithm, and the Binary Quantum Gravitational Search Algorithm (QBGSA), is hybridized with the Multi-layer Perceptron (MLP) classifier in order to select relevant feature subsets and improve classification accuracy. …”
    Get full text
    Thesis
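Record 8 hybridizes binary metaheuristic feature selection with an MLP classifier. A far simpler stochastic bit-flip search over feature masks, scored by wrapper accuracy (here leave-one-out 1-NN rather than an MLP), illustrates the wrapper idea; it is a weak stand-in for the thesis's NBBA/QBPSO/QBGSA ensemble, not a reproduction of it:

```python
import random

def loo_1nn_accuracy(X, y, mask):
    """Leave-one-out 1-NN accuracy using only the features enabled in mask."""
    if not any(mask):
        return 0.0
    hits = 0
    for i in range(len(X)):
        j = min((k for k in range(len(X)) if k != i),
                key=lambda k: sum((X[k][d] - X[i][d]) ** 2
                                  for d in range(len(mask)) if mask[d]))
        hits += int(y[j] == y[i])
    return hits / len(X)

def select_features(X, y, n_iters=200, seed=0):
    """Stochastic bit-flip search over feature masks, wrapper-scored."""
    rng = random.Random(seed)
    n_feat = len(X[0])
    mask = [1] * n_feat
    best_acc = loo_1nn_accuracy(X, y, mask)
    for _ in range(n_iters):
        cand = mask[:]
        cand[rng.randrange(n_feat)] ^= 1   # toggle one feature in or out
        acc = loo_1nn_accuracy(X, y, cand)
        # accept strict improvements, or equal accuracy with fewer features
        if acc > best_acc or (acc == best_acc and sum(cand) < sum(mask)):
            mask, best_acc = cand, acc
    return mask, best_acc
```

Binary metaheuristics like the bat or quantum-PSO variants replace the single random bit-flip with population-based moves over the same 0/1 mask space, but the fitness evaluation is the same wrapper loop.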
  14.

    Evolutionary cost-cognizant regression test case prioritization for object-oriented programs by Bello, AbdulKarim

    Published 2019
    “…Afterward, an evolutionary algorithm (EA) was employed to prioritize test cases based on the rate of severity of fault detection per unit test cost. …”
    Get full text
    Thesis
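Record 14 evolves test orderings scored by fault-detection severity per unit cost. A greedy approximation of that objective — repeatedly taking the test with the best additional fault severity per unit cost — conveys the fitness the EA searches over, though it is not the thesis's evolutionary method; the test names, costs, and severities below are invented:

```python
def prioritize(tests):
    """Greedy ordering by additional fault severity detected per unit cost."""
    remaining = dict(tests)        # test name -> (cost, {fault: severity})
    covered, order = set(), []
    while remaining:
        def gain(item):
            _, (cost, faults) = item
            # only severities of faults not yet detected count
            new = sum(s for f, s in faults.items() if f not in covered)
            return new / cost
        name, (_, faults) = max(remaining.items(), key=gain)
        order.append(name)
        covered |= set(faults)
        del remaining[name]
    return order
```

An EA can beat this greedy rule because it evaluates whole orderings at once, while the greedy choice is locally optimal only for the next position.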