Search Results - (( developing learner optimization algorithm ) OR ( java implementation bayes algorithm ))

  • Showing 1 - 13 results of 13 (results 7 - 13 are duplicate records of result 6)
Refine Results
  1.

    Embedded system for indoor guidance parking with Dijkstra’s algorithm and ant colony optimization by Mohammad Ata, Karimeh Ibrahim

    Published 2019
    “…In this system, the layout was designed under two categories which are the standard bay size and the small bay size to increase the parking bays. …”
    Get full text
    Thesis
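The first result pairs Dijkstra's algorithm with ant colony optimization for indoor parking guidance. As a reference point, a minimal Dijkstra implementation in Java over an adjacency list might look like the sketch below; the toy graph and its weights are invented for illustration and are not taken from the thesis.

```java
import java.util.*;

// Minimal Dijkstra sketch over an adjacency-list graph, assuming
// non-negative edge weights. Node indices and weights are illustrative.
public class DijkstraSketch {
    // Returns the shortest known distance from the source to each node
    static int[] shortestPaths(List<int[]>[] adj, int source) {
        int n = adj.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;
        // Priority queue ordered by tentative distance: {node, distance}
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt(a -> a[1]));
        pq.add(new int[]{source, 0});
        while (!pq.isEmpty()) {
            int[] cur = pq.poll();
            int u = cur[0], d = cur[1];
            if (d > dist[u]) continue;           // stale queue entry, skip
            for (int[] edge : adj[u]) {          // edge = {neighbor, weight}
                int v = edge[0], w = edge[1];
                if (dist[u] + w < dist[v]) {     // relaxation step
                    dist[v] = dist[u] + w;
                    pq.add(new int[]{v, dist[v]});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // Toy parking-lot graph: 4 junctions with weighted driving distances
        @SuppressWarnings("unchecked")
        List<int[]>[] adj = new List[4];
        for (int i = 0; i < 4; i++) adj[i] = new ArrayList<>();
        adj[0].add(new int[]{1, 4}); adj[0].add(new int[]{2, 1});
        adj[2].add(new int[]{1, 2}); adj[1].add(new int[]{3, 1});
        adj[2].add(new int[]{3, 5});
        System.out.println(Arrays.toString(shortestPaths(adj, 0))); // distances from node 0
    }
}
```

In the thesis's setting, nodes would correspond to junctions and bays in the parking layout, with ACO layered on top for dynamic route refinement.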
  2.

    Adoption of machine learning algorithm for analysing supporters and non supporters feedback on political posts / Ogunfolajin Maruff Tunde by Ogunfolajin, Maruff Tunde

    Published 2022
    “…The support vector machines (SVM) algorithm obtained the overall best results of 94.5% accuracy, 91.8% precision, 91.7% recall, and 91.1% f-Measure while the naïve bayes (NB) algorithm obtained the best AUC score of 0.944 with the tweet data of Dato Seri Anwar. …”
    Get full text
    Thesis
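The second result matches the query's "java implementation bayes algorithm" terms: it compares SVM against naïve Bayes on political-feedback text. A minimal multinomial naïve Bayes with Laplace smoothing can be sketched in Java as follows; the two-class labels mirror the thesis's supporter/non-supporter framing, but the training phrases are made up for illustration and the thesis's actual dataset and preprocessing are not reproduced.

```java
import java.util.*;

// Tiny multinomial naive Bayes sketch for two-class text classification
// (1 = supporter, 0 = non-supporter) with Laplace smoothing.
public class NaiveBayesSketch {
    Map<String, int[]> wordCounts = new HashMap<>(); // word -> per-class counts
    int[] classDocs = new int[2];                    // documents per class
    int[] classWords = new int[2];                   // total words per class
    Set<String> vocab = new HashSet<>();

    void train(String doc, int label) {
        classDocs[label]++;
        for (String w : doc.toLowerCase().split("\\s+")) {
            wordCounts.computeIfAbsent(w, k -> new int[2])[label]++;
            classWords[label]++;
            vocab.add(w);
        }
    }

    // Returns the class with the higher log-posterior
    int predict(String doc) {
        double[] logp = new double[2];
        int totalDocs = classDocs[0] + classDocs[1];
        for (int c = 0; c < 2; c++) {
            logp[c] = Math.log((double) classDocs[c] / totalDocs); // log prior
            for (String w : doc.toLowerCase().split("\\s+")) {
                int count = wordCounts.containsKey(w) ? wordCounts.get(w)[c] : 0;
                // Laplace-smoothed likelihood P(w | c)
                logp[c] += Math.log((count + 1.0) / (classWords[c] + vocab.size()));
            }
        }
        return logp[1] > logp[0] ? 1 : 0;
    }

    public static void main(String[] args) {
        NaiveBayesSketch nb = new NaiveBayesSketch();
        nb.train("great leader strong vision", 1);   // 1 = supporter
        nb.train("support this great policy", 1);
        nb.train("bad corrupt weak leader", 0);      // 0 = non-supporter
        nb.train("reject this bad policy", 0);
        System.out.println(nb.predict("great policy")); // classifies as supporter
    }
}
```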
  3.

    A conceptual multi-agent framework using ant colony optimization and fuzzy algorithms for learning style detection by Basheer G.S., Ahmad M.S., Tang A.Y.C.

    Published 2023
    “…The multi-agent system applies ant colony optimization and fuzzy logic search algorithms as tools to detect learning styles. …”
    Conference Paper
  4.

    Meta-Heuristic Algorithms for Learning Path Recommender at MOOC by Son, N.T., Jaafar, J., Aziz, I.A., Anh, B.N.

    Published 2021
    “…We have developed metaheuristic algorithms, including the Genetic Algorithm (GA) and the Ant Colony Optimization Algorithm (ACO), to solve the proposed model. …”
    Get full text
    Article
  5.

    An ensemble deep learning classifier stacked with fuzzy ARTMAP for malware detection by Tan, Shing Chiang, Al-Andoli, Mohammed Nasser, Lim, Kok Swee, Goh, Pey Yun, Lim, Chee Peng

    Published 2023
    “…The stacked ensemble method uses several heterogeneous deep neural networks as the base learners. During the training and optimization process, these base learners adopt a hybrid BP and Particle Swarm Optimization algorithm to combine both local and global optimization capabilities for identifying optimal features and improving the classification performance. …”
    Get full text
    Article
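The fifth result's abstract mentions a hybrid BP and Particle Swarm Optimization training scheme. The PSO half of that hybrid follows the standard velocity/position update rule, which can be sketched standalone in Java as below; this minimizes a simple sphere function rather than neural-network weights, and all hyperparameters are conventional defaults, not values from the paper.

```java
import java.util.*;

// Standalone PSO sketch minimizing the sphere function f(x) = sum(x_i^2).
// Illustrates only the generic PSO update rule, not the paper's BP-PSO hybrid.
public class PsoSketch {
    public static double[] minimizeSphere(int dim, int particles, int iters, long seed) {
        Random rnd = new Random(seed);
        double w = 0.7, c1 = 1.5, c2 = 1.5;        // inertia and acceleration terms
        double[][] x = new double[particles][dim];  // positions
        double[][] v = new double[particles][dim];  // velocities
        double[][] pBest = new double[particles][dim];
        double[] pBestVal = new double[particles];
        double[] gBest = new double[dim];
        double gBestVal = Double.MAX_VALUE;
        for (int i = 0; i < particles; i++) {
            for (int d = 0; d < dim; d++) x[i][d] = rnd.nextDouble() * 10 - 5;
            pBest[i] = x[i].clone();
            pBestVal[i] = sphere(x[i]);
            if (pBestVal[i] < gBestVal) { gBestVal = pBestVal[i]; gBest = x[i].clone(); }
        }
        for (int t = 0; t < iters; t++) {
            for (int i = 0; i < particles; i++) {
                for (int d = 0; d < dim; d++) {
                    // Velocity update: inertia + cognitive pull + social pull
                    v[i][d] = w * v[i][d]
                            + c1 * rnd.nextDouble() * (pBest[i][d] - x[i][d])
                            + c2 * rnd.nextDouble() * (gBest[d] - x[i][d]);
                    x[i][d] += v[i][d];
                }
                double f = sphere(x[i]);
                if (f < pBestVal[i]) { pBestVal[i] = f; pBest[i] = x[i].clone(); }
                if (f < gBestVal) { gBestVal = f; gBest = x[i].clone(); }
            }
        }
        return gBest;
    }

    static double sphere(double[] x) {
        double s = 0;
        for (double xi : x) s += xi * xi;
        return s;
    }

    public static void main(String[] args) {
        double[] best = minimizeSphere(3, 30, 200, 42);
        System.out.println(sphere(best)); // converges close to 0
    }
}
```

In the paper's hybrid, the fitness function would instead be the network's classification loss, with PSO providing global exploration and backpropagation local refinement.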
  6.

    New model combination meta-learner to improve accuracy prediction P2P lending with stacking ensemble learning by Muslim, Much Aziz, Nikmah, Tiara Lailatul, Agustina Pertiwi, Dwika Ananda, Subhan, Subhan, Jumanto, Jumanto, Dasril, Yosza, Iswanto, Iswanto

    Published 2023
    “…The SMOTE method is used to balance the data, and LightGBM feature selection with stacking ensemble learning (LGBFS-StackingXGBoost) is used to optimize accuracy. A new stacking ensemble model combines three base-learner algorithms, namely KNN, SVM, and Random Forest, in an XGBoost meta-learner. …”
    Get full text
    Article
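The stacking idea described in this abstract (base-learner predictions fed into a meta-learner) can be sketched minimally in Java. The paper's actual stack (KNN, SVM, and Random Forest feeding XGBoost) is far richer; this toy uses two hand-made threshold "base learners" and a logistic-regression meta-learner on invented data, purely to illustrate training a meta-learner on base-learner outputs.

```java
// Minimal stacking-ensemble sketch: two weak threshold base learners feed
// their predictions into a logistic-regression meta-learner.
public class StackingSketch {
    // Base learner: thresholds a single feature (threshold fixed for brevity)
    static int baseLearner(double[] x, int feature, double threshold) {
        return x[feature] >= threshold ? 1 : 0;
    }

    // Meta-learner: logistic regression over the two base predictions
    static double[] trainMeta(double[][] X, int[] y, double lr, int epochs) {
        double[] w = new double[3]; // weights for [baseA, baseB, bias]
        for (int e = 0; e < epochs; e++) {
            for (int i = 0; i < X.length; i++) {
                int a = baseLearner(X[i], 0, 4.0);
                int b = baseLearner(X[i], 1, 4.0);
                double z = w[0] * a + w[1] * b + w[2];
                double p = 1.0 / (1.0 + Math.exp(-z));
                double err = y[i] - p;          // gradient of the log-loss
                w[0] += lr * err * a;
                w[1] += lr * err * b;
                w[2] += lr * err;
            }
        }
        return w;
    }

    static int predict(double[] w, double[] x) {
        int a = baseLearner(x, 0, 4.0), b = baseLearner(x, 1, 4.0);
        return (w[0] * a + w[1] * b + w[2]) >= 0 ? 1 : 0;
    }

    public static void main(String[] args) {
        // Toy data: each base learner alone covers only half the positives;
        // the meta-learner learns to combine them.
        double[][] X = {{5,0},{6,1},{0,5},{1,6},{0,0},{1,1},{2,0},{0,2}};
        int[] y =      { 1,    1,    1,    1,    0,    0,    0,    0 };
        double[] w = trainMeta(X, y, 0.5, 500);
        int correct = 0;
        for (int i = 0; i < X.length; i++) if (predict(w, X[i]) == y[i]) correct++;
        System.out.println(correct + "/" + X.length);
    }
}
```

Each base learner alone misclassifies half the positive examples here; the meta-learner recovers full accuracy by learning an OR over their outputs, which is the essence of the stacking approach the abstract describes.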