DETEKSI OTOMATIS NOMINAL UANG KERTAS RUPIAH UNTUK TUNANETRA MENGGUNAKAN ALGORITMA ARSITEKTUR SSD MOBILENETV3


Ario Prima
Dian Budhi Santoso
Lela Nurpulaela

Abstract

Money is legal tender in society's everyday buying and selling transactions. Visually impaired people, however, have difficulty recognizing the denomination of Rupiah banknotes. Although Rupiah banknotes carry a blind code (a tactile marking), it is often ineffective when the notes are worn. In this study, we use Artificial Intelligence technology, specifically deep learning, to help visually impaired people recognize Rupiah banknote denominations more easily. Our system is built using a Convolutional Neural Network (CNN) with the MobileNetV3 architecture and the Single Shot Multibox Detector (SSD) algorithm. We tested the system under various lighting conditions, both daytime and nighttime. Under sufficient lighting, the system achieves an accuracy of 80% to 95%; in low-light or nighttime conditions, however, it has difficulty detecting banknotes. This research therefore contributes to improving the accessibility of Rupiah banknotes for visually impaired people, while also showing that low-light performance needs more attention in future development.
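The pipeline described in the abstract ends with the detector's output being announced to the user. A minimal sketch of that post-detection step is shown below, assuming the SSD MobileNetV3 detector yields (class label, confidence) pairs after non-maximum suppression; all names, the label set, and the 0.80 threshold are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical post-processing sketch: pick the most confident banknote
# detection above a threshold and build a message for a text-to-speech engine.
from typing import Optional

# Rupiah banknote denominations used as detector classes (assumed label set).
RUPIAH_CLASSES = {"1000", "2000", "5000", "10000", "20000", "50000", "100000"}

def announce_denomination(
    detections: list[tuple[str, float]],
    score_threshold: float = 0.80,
) -> Optional[str]:
    """Return a spoken-feedback string for the best detection, or None.

    `detections` is a list of (label, score) pairs, as produced by an
    SSD-style detector after non-maximum suppression.
    """
    # Keep only confident detections of known banknote classes.
    candidates = [
        (label, score)
        for label, score in detections
        if label in RUPIAH_CLASSES and score >= score_threshold
    ]
    if not candidates:
        # Nothing reliable to announce -- e.g. the low-light failure case
        # reported in the abstract.
        return None
    best_label, best_score = max(candidates, key=lambda d: d[1])
    return f"Detected {best_label} Rupiah (confidence {best_score:.0%})"
```

Rejecting low-confidence detections outright, rather than announcing the best guess, matches the accessibility goal: a wrong spoken denomination is worse for a blind user than a request to rescan the note.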


How to Cite
Prima, A., Santoso, D. B., & Nurpulaela, L. (2022). DETEKSI OTOMATIS NOMINAL UANG KERTAS RUPIAH UNTUK TUNANETRA MENGGUNAKAN ALGORITMA ARSITEKTUR SSD MOBILENETV3. TEKNOKOM, 6(2), 151–159. https://doi.org/10.31943/teknokom.v6i2.166

References

  1. R. W. Nugroho, “Deteksi Nominal Uang Kertas Berbasis Deep Learning Dengan Voice Feedback Menggunakan Raspberry Pi 3B,” Universitas Islam Negeri Syarif Hidayatullah, Jakarta, 2022.
  2. W. Dadang, “Memahami kecerdasan buatan berupa deep learning dan machine learning,” in Prosiding Industrial Research Workshop and National Seminar, vol. 10, 2018.
  3. M. Grandini, E. Bagli, and G. Visani, “Metrics for multi-class classification: an overview,” arXiv preprint arXiv:2008.05756, 2020.
  4. W. M. Baihaqi, F. Sulistiyana, and A. Fadholi, “Pengenalan Artificial Intelligence untuk Siswa dalam Menghadapi Dunia Kerja di Era Revolusi Industri 4.0,” RESWARA: Jurnal Pengabdian Kepada Masyarakat, vol. 2, no. 1, pp. 79–88, 2021.
  5. H. Subakti et al., Artificial Intelligence. Bandung: Media Sains Indonesia, 2022.
  6. M. Elgendy, Deep Learning for Vision Systems. Simon and Schuster, 2020.
  7. K. Wilianto, “Evaluation Metrics pada Computer Vision dari Klasifikasi hingga Deteksi Objek,” Data Folks Indonesia, Oct. 2021.
  8. P. Arfienda, “Materi Pendamping Memahami Convolutional Neural Networks dengan Tensorflow,” 2019.
  9. A. Howard et al., “Searching for MobileNetV3,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 1314–1324.
  10. C. G. W. Pramana, D. C. Khrisne, and N. P. Sastra, “Rancang Bangun Object Detection pada Robot Soccer Menggunakan Metode Single Shot Multibox Detector (SSD MobileNetV2),” Jurnal SPEKTRUM, vol. 8, no. 2, 2021.
  11. Z.-Q. Zhao, P. Zheng, S. Xu, and X. Wu, “Object detection with deep learning: A review,” IEEE Trans. Neural Netw. Learn. Syst., vol. 30, no. 11, pp. 3212–3232, 2019.
  12. Y. Xiong et al., “MobileDets: Searching for object detection architectures for mobile accelerators,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 3825–3834.
  13. D. Manajang, S. R. U. A. Sompie, and A. Jacobus, “Implementasi Framework Tensorflow Object Detection API dalam Mengklasifikasi Jenis Kendaraan Bermotor,” Jurnal Teknik Informatika, vol. 15, no. 3, pp. 171–178, 2021.
  14. S. R. Dewi, “Deep Learning Object Detection pada Video Menggunakan Tensorflow dan Convolutional Neural Network,” 2018.