Welcome to CNSA 2025

18th International Conference on Network Security & Applications (CNSA 2025)

January 25 ~ 26, 2025, Copenhagen, Denmark



Accepted Papers
Grounding Large Language Models in Knowledge and Reason

Ram Eshwar Kaundinya, Drexel University, Philadelphia PA 19104, USA

ABSTRACT

Large language models (LLMs) have led to a leap in generative AI capability, producing human-like language across vast domains. While this has been a stunning success in some respects, it has highlighted many of the limitations of a purely connectionist approach to AI. LLMs are not sufficiently grounded in a knowledge base, which exacerbates problems with reasoning and planning, and common mitigations such as fine-tuning and Retrieval-Augmented Generation (RAG) are not robust and remain limited. This paper takes a symbolic approach to online learning within a trading game environment. I introduce two novel ideas: a memory module based on concepts from cognitive architectures such as ACT-R, and a symbolic knowledge graph. Together they enable online learning within a dynamic game environment. The architecture is generalizable and allows LLMs to be grounded in a desired domain without extensive fine-tuning or RAG, enabling the creation of personalized LLM systems.
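
For readers unfamiliar with ACT-R-style memory, a minimal sketch of the base-level activation idea the abstract alludes to (B_i = ln Σ_j t_j^(-d)) is given below; the class, parameter names, and retrieval rule are illustrative assumptions, not the paper's implementation.

```python
import math

class DeclarativeMemory:
    """Illustrative ACT-R-style memory store: a chunk's retrieval priority
    is its base-level activation B_i = ln(sum_j t_j^-d), where the t_j are
    the times since past uses and d is a decay rate."""

    def __init__(self, decay: float = 0.5):
        self.decay = decay
        self.uses = {}  # chunk -> list of use timestamps

    def record_use(self, chunk: str, now: float) -> None:
        self.uses.setdefault(chunk, []).append(now)

    def activation(self, chunk: str, now: float) -> float:
        ages = [now - t for t in self.uses.get(chunk, []) if now > t]
        if not ages:
            return float("-inf")
        return math.log(sum(age ** -self.decay for age in ages))

    def retrieve(self, now: float, threshold: float = 0.0):
        # Return the most active chunk, if it clears the retrieval threshold.
        best = max(self.uses, key=lambda c: self.activation(c, now), default=None)
        if best is not None and self.activation(best, now) >= threshold:
            return best
        return None
```

Frequently and recently used chunks stay retrievable while stale ones decay, which is what lets such a module support online learning without retraining the underlying model.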

Keywords

Large Language Models, Cognitive Architecture, Neurosymbolic Computing.


From Rigid Robo-advisors to Human-like Interactions: Revolutionizing Financial Assistance with LLM-powered Solutions

Hamza Landolsi1, Ines Abdeljaoued-Tej1,2, 1Engineering School of Statistics and Information Analysis, University of Carthage, Ariana, Tunisia, 2Laboratory of BioInformatics, bioMathematics, and bioStatistics (LR24IPT09), Institut Pasteur de Tunis, University of Tunis El Manar, 13, place Pasteur, B.P. 74, Belvédère, 1002, Tunis, Tunisia

ABSTRACT

Generative Artificial Intelligence (GenAI) is revolutionizing the business world by improving availability and efficiency, reducing costs, and driving innovation. This paper explores the application of Large Language Models (LLMs) and GenAI to finance. It proposes a novel framework for reimagining robo-advisory systems, moving from traditional rigid platforms to a more humanized solution that engages the investor in hand-picking assets and uses LLMs to better understand their goals and profile. We designed an end-to-end solution that overcomes several limitations of existing robo-advisors, including their lack of flexibility, their narrow range of supported asset types (usually only equities), and the problem of real-time access to high-quality data. The solution architecture includes dynamic client profiling, risk-aversion estimation, portfolio optimization, and a tailored asset-selector agent that uses robust data pipelines to curate the latest market information. Through iterative development, we employed prompt engineering and multi-agent workflows to enhance user interactions and deliver meaningful insights. By developing an innovative chatbot platform, we demonstrate the potential of LLMs to transform customer service, increase engagement, and provide strategic financial advice.
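
As one concrete point of reference for the risk-aversion and portfolio-optimization components, a minimal mean-variance sketch follows; the objective (maximize μᵀw − (γ/2)wᵀΣw), constraints, and numbers are illustrative assumptions, since the paper's optimizer is not specified here.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_portfolio(mu, cov, gamma):
    """Maximize mu @ w - (gamma/2) * w @ cov @ w, with weights summing
    to 1 and no short selling (illustrative constraints)."""
    n = len(mu)
    objective = lambda w: -(mu @ w - 0.5 * gamma * w @ cov @ w)
    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    bounds = [(0.0, 1.0)] * n
    result = minimize(objective, np.full(n, 1.0 / n),
                      bounds=bounds, constraints=constraints)
    return result.x

# A more risk-averse client (larger gamma, e.g. from profiling) is pushed
# toward the low-variance assets; placeholder expected returns and covariance.
mu = np.array([0.08, 0.05, 0.03])
cov = np.diag([0.04, 0.02, 0.01])
print(optimize_portfolio(mu, cov, gamma=5.0))
```

In an LLM-driven pipeline, the conversational agent's role would be to estimate inputs such as gamma from the dialogue, leaving the allocation itself to a deterministic optimizer like this one.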

Keywords

Generative AI, Large Language Models (LLMs), Big Data, Practical Applications, Agentic Design Patterns, Finance, Investment Analysis, Portfolio Optimization


Survey: Understanding the Challenges of Machine Learning Experts Using Named Entity Recognition Tools

Florian Freund, Philippe Tamla, and Matthias Hemmje, University of Hagen, Faculty of Mathematics and Computer Science, 58097 Hagen, Germany

ABSTRACT

Comparing and selecting Named Entity Recognition (NER) tools and frameworks is a critical step in leveraging NER for Information Retrieval to support the development of Clinical Practice Guidelines. This paper presents a survey based on Kasunic’s survey research methodology to identify the criteria used by Machine Learning (ML) experts to evaluate NER tools and their significance in the selection process. In addition, it examines the main challenges faced by ML experts when choosing suitable NER tools and frameworks. Following Nunamaker’s methodology, the article begins with an introduction to the topic, contextualizes the research, reviews the state of the art in science and technology, and identifies challenges for an expert survey on NER tools and frameworks. This is followed by a description of the survey’s design and implementation. The paper concludes with an evaluation of the survey results and the insights gained, ending with a summary and conclusions.

Keywords

Expert Survey, Natural Language Processing, Named Entity Recognition, Machine Learning, Cloud Computing.


Figurative Style Classification in Arabic Texts Using mT5-based Pre-trained Language Models

Zouheir BANOU, Sanaa EL FILALI, El Habib BENLAHMAR, Laila ELJIANI, and Fatima-Zahra ALAOUI, Faculty of Sciences Ben M’Sik – Hassan II University, Bd Commandant Driss Al Harti, 7955, Casablanca, Morocco

ABSTRACT

Figurative language detection is a challenging task in natural language processing (NLP), especially for morphologically rich languages like Arabic. This study investigates the effectiveness of pre-trained language models (PLMs) for detecting hyperbole and metaphor in Arabic, comparing general-purpose models of varying sizes (mT5-Small, mT5-Base, and mT5-Large) with a specialized, fine-tuned model (MMFLD) trained specifically for figurative language tasks. Results indicate that while larger models such as mT5-Large excel in capturing complex figurative expressions, the task-specific MMFLD model achieves competitive performance, especially in metaphor detection. This highlights the benefits of both model size and specialized training in figurative language tasks.
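
Since mT5 is a text-to-text model with no classification head, fine-tuning it for style classification is typically framed as generating a label string; a minimal Hugging Face sketch under that assumption (the prompt format and label vocabulary here are illustrative, not the paper's setup):

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

sentence = "..."  # an Arabic sentence to classify
inputs = tokenizer("classify figurative style: " + sentence,
                   return_tensors="pt")
# Labels are emitted as text tokens, e.g. "metaphor", "hyperbole", "literal".
targets = tokenizer("metaphor", return_tensors="pt")

# One training step: seq2seq cross-entropy against the label text.
loss = model(input_ids=inputs.input_ids,
             attention_mask=inputs.attention_mask,
             labels=targets.input_ids).loss
loss.backward()
```

The same loop works unchanged across mT5-Small/Base/Large, which is what makes the size comparison in the study straightforward to run.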

Keywords

Figurative Language Detection, Arabic NLP, Pre-trained Language Models (PLMs).


The Impact of IoT on the Modern World: A Review and Evaluation Study

Heidrich Vicci, College of Business, Florida International University, USA

ABSTRACT

The IoT provides users with dynamic, rich, context-aware services that are highly responsive to their needs, letting them remotely monitor and control their environment. The IoT will allow more direct integration of the physical world into computer-based systems, resulting in improved accuracy, efficiency, and economic benefit in addition to reduced human intervention. The basic premise is to have objects or things working for humans rather than humans working for them. Usage of the term follows a similar trend in both economic activity and search engine results, which could indicate a correlation between the two. More than 16 years after its inception, the technology remains a hot topic both in the business world and in academia (Jagarlamudi et al., 2022; Pradeep et al., 2021).

Keywords

Internet of Things (IoT), computer-based systems, human intervention.


An Intelligent Tracking System to Analyze Shooting Angles Compared to NBA Players Using AI and Machine Learning

Zhixiang Zhang1, Ang Li2, 1Ruben S. Ayala High School, 14255 Peyton Dr, Chino Hills, CA 91709, 2California State University, Long Beach, 1250 Bellflower Blvd, Long Beach, CA 90840

ABSTRACT

This project aims to solve the problem of providing real-time, personalized feedback on basketball shooting form using machine learning (ML). By comparing a user’s body angles during their shot to those of professional players, the program delivers tailored suggestions for improvement. The core technologies used include pose detection through computer vision and a machine learning model that analyzes and compares joint angles. Challenges included fine-tuning the model’s confidence score to ensure accurate comparisons between users and pros, handling image quality issues, and providing clear feedback to users of different skill levels. The experiments showed that when professional players were compared to themselves, the system returned very high similarity scores, confirming the model’s accuracy. The project stands out because of its personalized feedback feature, helping both beginner and advanced users improve their shooting form. By addressing common limitations such as image quality and skill variability, this tool offers a unique solution for athletes looking to refine their performance.
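
A minimal sketch of the kind of joint-angle comparison the abstract describes, assuming 2D keypoints (e.g., shoulder, elbow, wrist) from an off-the-shelf pose detector; the similarity measure is an illustrative assumption, not the project's model:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, e.g. the elbow
    angle from shoulder, elbow, and wrist keypoints."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def angle_similarity(user_angles, pro_angles, scale=90.0):
    """Map mean absolute angle difference over matched joints to [0, 1];
    comparing a pro's angles to themselves yields 1.0, matching the
    sanity check reported in the abstract."""
    diffs = np.abs(np.asarray(user_angles) - np.asarray(pro_angles))
    return float(np.mean(1.0 - np.minimum(diffs / scale, 1.0)))

# Example: elbow angle from three keypoints, then a one-joint comparison.
elbow = joint_angle((0.2, 0.1), (0.3, 0.3), (0.45, 0.25))
print(elbow, angle_similarity([elbow], [120.0]))
```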

Keywords

Basketball Analysis, Machine Learning Comparison, Computer Vision.


Optimizing Deep Learning Models for Osteoporosis Detection: A Case Study on Knee X-ray Images Using Transfer Learning

Zahraa Shams Alden1, 2 and Oguz Ata3, 1University of Altinbas, Electrical and Computer Engineering, Turkey, 2University of Kerbala, Tourism Science, Iraq, 3University of Altinbas, Information Technology, Turkey

ABSTRACT

Medical image analysis is a rapidly growing area of study in which speed and precision are essential. Deep learning can help resolve medical image processing problems, but it needs datasets labelled by experts to learn effectively. Such data can be difficult to obtain in the medical field, where access to large amounts of labeled data is often limited; the complexity of medical data poses a further challenge. This study therefore proposes a deep neural network model for medical imaging that detects osteoporosis using transfer learning with MobileNetV2. Class weights are used to alleviate class imbalance, and a learning rate schedule improves model adaptability. The model was created in two variants: one with both a learning rate schedule and class weights, reaching an accuracy of 96%, and one with only a learning rate schedule, reaching 94%. The experimental results illustrate the efficiency of the proposed framework and should inform the future design of deep learning models for predicting bone fractures and for speeding up medical data analysis and interpretation.
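
A minimal Keras sketch of the combination described above, transfer learning with MobileNetV2 plus a learning rate schedule and class weights; the schedule type, classification head, and weight values are illustrative assumptions:

```python
import tensorflow as tf

# Frozen ImageNet backbone with a small classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # osteoporosis vs. normal
])

# Learning rate schedule: exponential decay is one plausible choice.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)
model.compile(optimizer=tf.keras.optimizers.Adam(schedule),
              loss="binary_crossentropy", metrics=["accuracy"])

# Class weights counteract imbalance; the values below are placeholders
# that would normally be derived from the class frequencies.
# model.fit(train_ds, validation_data=val_ds, epochs=20,
#           class_weight={0: 1.0, 1: 3.0})
```

Dropping the `class_weight` argument while keeping the schedule reproduces the structure of the study's second variant.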

Keywords

Medical image analysis, Machine learning, CNN, Transfer Learning, Osteoporosis, Deep learning, MobileNetV2.


Performance Evaluation of Mobility in Non-terrestrial Networks

Li-Sheng Chen1 and Shu-Han Liao2, 1Department of Computer Science and Information Engineering, National Ilan University, Ilan, 260007, R.O.C., 2Department of Electrical Engineering, Tamkang University, New Taipei, 251301, R.O.C.

ABSTRACT

Low Earth Orbit (LEO) satellites exhibit high mobility, leading to frequent handover challenges. Addressing these handover issues is crucial for maintaining seamless and stable service connections. In this paper, we tackle the handover problems in LEO networks by utilizing the D1 event, as discussed in the 3rd Generation Partnership Project (3GPP). Unlike in terrestrial networks, the difference between the reference signal received power (RSRP) at the cell edge and the cell center is minimal in non-terrestrial networks (NTN). Therefore, 3GPP has been exploring location-based handover methods that use absolute thresholds instead of comparing the RSRP of serving and neighboring cells in handover events. We introduce the D1 event as a handover trigger and explore handover parameters in conjunction with the UE’s position (referred to as enhanced D1) to ensure reliable handover for NTN. Simulation results show that enhanced D1 handover outperforms traditional D1 handover, particularly in reducing ping-pong effects and handover failures (HOF).
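
For reference, a sketch of the distance-based entering condition of the D1 event, paraphrasing 3GPP TS 38.331 (the UE has moved far enough from the serving cell's reference location and close enough to the candidate's); the hysteresis handling and parameter names are assumptions:

```python
def d1_event_entered(dist_serving_m, dist_candidate_m,
                     threshold1_m, threshold2_m, hysteresis_m=0.0):
    """Distance-based D1 entering condition (paraphrasing TS 38.331):
    both conditions must hold simultaneously, using absolute distance
    thresholds rather than RSRP comparisons."""
    cond1 = dist_serving_m - hysteresis_m > threshold1_m
    cond2 = dist_candidate_m + hysteresis_m < threshold2_m
    return cond1 and cond2
```

The "enhanced D1" idea in the abstract tunes these parameters jointly with the UE's position to cut ping-pong handovers and HOF.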

Keywords

Low earth orbit (LEO), Non-terrestrial networks (NTN), Mobility, Satellite communication, Handover.


Comparison of Training for Hand Gesture Recognition on Synthetic and Real Datasets

Pranav Vaidik Dhulipala, Samuel Oncken, Steven Claypool, and Stavros Kalafatis, Department of Electrical and Computer Engineering, Texas A&M University, College Station, Texas 77845, USA

ABSTRACT

Human gesture recognition is implemented in many human-robot interaction (HRI) applications. Building datasets that involve human subjects, while aiming to capture comprehensive diversity and all possible edge cases, is often both challenging and labor-intensive. Applying the concept of domain randomization to build synthetic datasets helps address this problem, but an innate reality gap always exists and needs to be mitigated. In this paper, we present and discuss a comprehensive performance comparison of our synthetic datasets with real ones and demonstrate the results.


Evolving Quantum Neural Network Operations with Data Re-uploading, Entanglement, and Consciousness Based on Orch OR Theory

Thomas McIver

ABSTRACT

This paper introduces an approach to quantum neural networks that combines the principles of data re-uploading and entanglement. Based on the Orchestrated Objective Reduction (Orch OR) theory proposed by Roger Penrose and Stuart Hameroff, the study explores how quantum mechanical processes can improve neural network capabilities. By reuploading classical data at different stages of computation and utilizing quantum entanglement, the proposed network aims to achieve advanced information processing and learning abilities. This approach not only enhances the network’s performance but also provides insights into the potential quantum basis of consciousness. The incorporation of these quantum operations within a feedback loop further enhances the learning process, potentially resulting in emergent behaviours reminiscent of consciousness.
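
A minimal sketch of a data re-uploading circuit with entangling gates, written in PennyLane as an assumed framework (the paper's actual circuit layout and library are not specified here): the classical input is encoded again before each trainable layer, interleaved with CNOTs.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def reuploading_circuit(x, weights):
    """Data re-uploading: re-encode the input x before every trainable
    layer, with a CNOT entangling the qubits in between."""
    for layer in range(weights.shape[0]):
        for w in range(2):
            qml.RY(x[w], wires=w)                  # re-encode the input
            qml.Rot(*weights[layer, w], wires=w)   # trainable rotation
        qml.CNOT(wires=[0, 1])                     # entangle the qubits
    return qml.expval(qml.PauliZ(0))

# 3 layers, 2 wires, 3 rotation angles per wire; trained by feeding the
# expectation value into a classical loss and optimizer (the feedback loop).
weights = np.array(np.random.uniform(0, np.pi, size=(3, 2, 3)),
                   requires_grad=True)
print(reuploading_circuit(np.array([0.1, 0.4]), weights))
```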


AIoT-based Smart Traffic Management System

Ahmed Mahmoud Elbasha and Mohammad M. Abdellatif, Electrical Engineering Department, Faculty of Engineering, The British University in Egypt, Cairo, Egypt

ABSTRACT

This paper presents a novel AI-based smart traffic management system designed to optimize traffic flow and reduce congestion in urban environments. By analysing live footage from existing CCTV cameras, this approach eliminates the need for additional hardware, thereby minimizing both deployment costs and ongoing maintenance expenses. The AI model processes live video feeds to accurately count vehicles and assess traffic density, allowing for adaptive signal control that prioritizes directions with higher traffic volumes. This real-time adaptability ensures smoother traffic flow, reduces congestion, and minimizes waiting times for drivers. Additionally, the proposed system is simulated using PyGame to evaluate its performance under various traffic conditions. The simulation results demonstrate that the AI-based system outperforms traditional static traffic light systems by 34%, leading to significant improvements in traffic flow efficiency. The use of AI to optimize traffic signals can play a crucial role in addressing urban traffic challenges, offering a cost-effective, scalable, and efficient solution for modern cities. This innovative system represents a key advancement in the field of smart city infrastructure and intelligent transportation systems.
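
One simple adaptive policy consistent with the description above allocates green time in proportion to the vehicle counts observed per direction; a minimal sketch, with the control law and constants as illustrative assumptions rather than the paper's algorithm:

```python
def allocate_green_times(vehicle_counts, cycle_s=120, min_green_s=10):
    """Split a fixed signal cycle among directions in proportion to their
    observed vehicle counts, guaranteeing a minimum green per direction.
    Assumes cycle_s >= len(vehicle_counts) * min_green_s."""
    n = len(vehicle_counts)
    spare = cycle_s - n * min_green_s
    total = sum(vehicle_counts) or 1  # avoid division by zero
    return [min_green_s + spare * c / total for c in vehicle_counts]

# The direction with the heaviest traffic gets the longest green phase.
print(allocate_green_times([42, 10, 25, 8]))
```

In the full system, the counts would come from the CCTV-based vehicle detector each cycle, so the allocation tracks traffic density in real time.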

Keywords

AI, ITS, IoT, Traffic Management


Fair-Anonymity: A Novel Fairness Notion for Cryptocurrency

Taishi Higuchi and Akira Otsuka, Institute of Information Security (IISEC), Kanagawa, Japan

ABSTRACT

In recent years, there has been a growing demand for using tokens of public blockchains like Bitcoin for legitimate transactions. However, the lack of authoritative guarantees on these tokens raises concerns about their potential misuse in criminal activities. Conversely, the introduction of full transparency regulation may stifle the highly innovative cryptocurrency community. This paper introduces a novel concept of fairness, termed Fair-Anonymity, which allows regulatory authorities to probabilistically trace the payer’s ID with a pre-agreed probability that depends solely on the total amount of the transaction, even when it is divided into smaller transactions. The Fair-Anonymity protocol can be applied to many blockchains by adding a proof to the transaction, which public verifiers can verify. Our scheme cryptographically enforces the revealing probability using k-out-of-n Committed Oblivious Transfer, ensuring that neither the sender nor the receiver can manipulate the probability or alter the committed values, thus disincentivizing illegal high-value transactions. Conversely, enterprises accepting only tokens with Fair-Anonymity proofs can externally demonstrate their commitment to lawful operations.
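
The requirement that the tracing probability depend only on the total amount "even when divided into smaller transactions" pins down a natural functional form: if each sub-transaction is traced independently and 1 − p(a) must be multiplicative under splitting, i.e. 1 − p(a₁ + a₂) = (1 − p(a₁))(1 − p(a₂)), then p(a) = 1 − e^(−λa). A numerical sketch of this split-invariance (an illustrative derivation, not necessarily the paper's construction):

```python
import math
import random

LAMBDA = 0.001  # pre-agreed tracing rate per unit of value (assumption)

def reveal_probability(amount):
    """p(a) = 1 - exp(-lambda * a): the form under which splitting a
    payment does not change the chance that at least one part is traced."""
    return 1.0 - math.exp(-LAMBDA * amount)

def traced_when_split(amount, parts, trials=50_000):
    """Empirical probability that at least one sub-transaction is revealed,
    assuming independent per-part tracing."""
    split = [amount / parts] * parts
    hits = sum(any(random.random() < reveal_probability(a) for a in split)
               for _ in range(trials))
    return hits / trials

print(reveal_probability(1000))     # ~0.632 for a single payment of 1000
print(traced_when_split(1000, 10))  # ~0.632 again when split into 10 parts
```

The cryptographic content of the paper is then enforcing such a probability so that neither party can bias it, which is where the k-out-of-n Committed Oblivious Transfer comes in.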

Keywords

Blockchain, Security, Electronic-cash, Cryptocurrency, Fairness, Anonymity, Traceability, Oblivious transfer.


Automatic Lung Nodule Segmentation in CT Images Based on U-Net Architectures

Alejandro Jerónimo1, Ignacio Rojas1, and Olga Valenzuela2, 1Computer Engineering, Automatics and Robotics Department, University of Granada, 18071 Granada, Spain, 2Department of Applied Mathematics, University of Granada, 18071 Granada, Spain

ABSTRACT

Lung cancer is the most common type of cancer worldwide, with 2.5 million new cases reported in recent years, according to the World Health Organization. It also has the highest mortality rate, making early diagnosis crucial. Deep Learning techniques, particularly computer-aided diagnosis (CAD) systems, have advanced the automatic detection of pulmonary diseases. While many studies propose pipelines with complex architectures, the nnU-Net model provides a robust, automatic framework for segmentation across various medical imaging modalities. This work evaluates nnU-Net’s performance in semantic segmentation of nodules of varying sizes by integrating various preprocessing techniques. Results show improved Dice Score and IoU metrics, especially for large nodules.
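
For reference, the two reported metrics on binary segmentation masks, Dice = 2|P∩T|/(|P|+|T|) and IoU = |P∩T|/|P∪T|, in a minimal numpy sketch:

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice and IoU for binary segmentation masks of the same shape;
    eps guards against empty masks."""
    pred, target = np.asarray(pred).astype(bool), np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return float(dice), float(iou)
```

Since small nodules occupy few voxels, a handful of boundary errors moves both scores sharply, which is why segmentation quality is usually reported per nodule-size group as in this work.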

Keywords

Deep learning, lung nodules, semantic segmentation, U-Net, nnU-Net.