Review: Algoritma Optimasi

Authors

  • Imam Yunianto, Institut Bisnis Muhammadiyah Bekasi
  • Yunus Fadhilah S

DOI:

https://doi.org/10.52661/j_ict.v5i2.210

Keywords:

Optimization, Algorithm, Algorithm Optimization

Abstract

Optimization is needed whenever two parameters that conflict with each other must be achieved at the same time, for example when a fisherman wants a large catch of fish at minimal operational cost. The difficulty is that the two objectives pull in opposite directions: obtaining a larger catch normally requires higher operational costs, so achieving large results while keeping costs low is a challenge. A substantial body of research therefore investigates how optimization algorithms can be used to obtain the best results. This paper reviews the optimization methods used toward that goal, which rely on different algorithms and produce different results. The review identifies four methods used in optimization research.
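The conflict described above, maximizing the catch while minimizing operational cost, is the standard setting for Pareto efficiency: no single solution optimizes both objectives at once, so the useful output is the set of non-dominated trade-offs. A minimal sketch in Python of such a Pareto filter (the Trip type, its fields, and the sample numbers are hypothetical, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Trip:
    """A hypothetical fishing trip: catch (kg) to maximize, cost to minimize."""
    catch_kg: float
    cost: float

def dominates(a: Trip, b: Trip) -> bool:
    """True if trip `a` is at least as good as `b` on both objectives
    and strictly better on at least one."""
    return (a.catch_kg >= b.catch_kg and a.cost <= b.cost
            and (a.catch_kg > b.catch_kg or a.cost < b.cost))

def pareto_front(trips: list[Trip]) -> list[Trip]:
    """Keep only the trips no other trip dominates: the set of best
    available trade-offs between catch size and operational cost."""
    return [t for t in trips if not any(dominates(o, t) for o in trips)]

if __name__ == "__main__":
    trips = [Trip(120, 900), Trip(80, 500), Trip(100, 950), Trip(60, 520)]
    for t in pareto_front(trips):
        print(t)  # prints Trip(120, 900) and Trip(80, 500)
```

Every trip that survives the filter can improve one objective only by worsening the other; choosing among such trade-offs is the task the optimization methods surveyed here address.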


Published

2023-12-26

How to Cite

Yunianto, I., & Fadhilah S, Y. (2023). Review: Algoritma Optimasi. Journal of Informatics and Communication Technology (JICT), 5(2), 111–125. https://doi.org/10.52661/j_ict.v5i2.210

Issue

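Vol. 5 No. 2 (2023)
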
Section

Informatika