January 25-26, 2025, Copenhagen, Denmark
Ram Eshwar Kaundinya, Drexel University, Philadelphia PA 19104, USA
Large language models (LLMs) have led to a leap in generative AI capability, with human-like language production across vast domains. While this has been a stunning success in some respects, it has highlighted many of the limitations of a purely connectionist approach to AI. LLMs are not sufficiently grounded in a knowledge base, which exacerbates problems with reasoning and planning. Fine-tuning and Retrieval-Augmented Generation (RAG) are not robust and remain limited. This paper takes a symbolic approach to online learning within a trading game environment. I introduce two novel ideas: a memory module based on ideas from cognitive architectures such as ACT-R, and a symbolic knowledge graph. This allows for online learning within a dynamic game environment. The novel architecture is generalizable and allows for grounding LLMs in a desired domain without extensive fine-tuning or RAG, enabling the creation of personalized LLM systems.
Large Language Models, Cognitive Architecture, Neurosymbolic Computing.
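A memory module of the kind described can be grounded in ACT-R's base-level learning equation, where a chunk's activation is A_i = ln(Σ_j (t − t_j)^(−d)). The sketch below is a minimal illustration under that standard equation; the chunk format, decay rate, and retrieval threshold are illustrative assumptions, not the paper's implementation.

```python
import math

class DeclarativeMemory:
    """ACT-R-style declarative memory with base-level activation decay."""

    def __init__(self, decay=0.5, threshold=-1.0):
        self.decay = decay          # d in the base-level learning equation
        self.threshold = threshold  # minimum activation for retrieval (assumed value)
        self.accesses = {}          # chunk -> list of access timestamps

    def store(self, chunk, now):
        self.accesses.setdefault(chunk, []).append(now)

    def activation(self, chunk, now):
        # Base-level activation: A_i = ln( sum_j (now - t_j)^(-d) )
        times = self.accesses.get(chunk, [])
        total = sum((now - t) ** (-self.decay) for t in times if now > t)
        return math.log(total) if total > 0 else float("-inf")

    def retrieve(self, now):
        # Return the most active chunk above the retrieval threshold, if any.
        best = max(self.accesses, key=lambda c: self.activation(c, now), default=None)
        if best is not None and self.activation(best, now) >= self.threshold:
            return best
        return None

# Hypothetical trading-game fact; recently reinforced chunks win retrieval.
mem = DeclarativeMemory()
mem.store("price(gold, up)", now=0.0)
mem.store("price(gold, up)", now=5.0)
print(mem.retrieve(now=10.0))
```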
Hamza Landolsi1, Ines Abdeljaoued-Tej1, 2, 1Engineering School of Statistics and Information Analysis, University of Carthage, Ariana, Tunisia, 2Laboratory of BioInformatics, bioMathematics, and bioStatistics (LR24IPT09), Institut Pasteur de Tunis, University of Tunis El Manar, 13, place Pasteur, B.P. 74, Belvédère, 1002, Tunis, Tunisia
Generative Artificial Intelligence (GenAI) is revolutionizing the business world by improving availability and efficiency, reducing costs, and driving innovation. This paper explores the application of Large Language Models (LLMs) and GenAI to finance. It proposes a novel framework for reimagining robo-advisory systems, moving from a traditional, rigid platform to a more humanized solution that engages the investor directly in the asset selection process and better understands their goals and profile using LLMs. We designed an end-to-end solution to overcome limitations such as the lack of flexibility in robo-advisors, the narrow range of supported asset types (usually only equities), and the problem of real-time access to high-quality data. The solution architecture includes dynamic client profiling, risk aversion estimation, portfolio optimization, and a tailored asset-selector agent that uses robust data pipelines to curate the latest market information. Through iterative development, we employed prompt engineering and multi-agent workflows to enhance user interactions and deliver meaningful insights. By developing an innovative chatbot platform, we demonstrate the potential of LLMs to transform customer service, increase engagement, and provide strategic financial advice.
Generative AI, Large Language Models (LLM), Big Data, Practical Applications, Agentic Design Patterns, Finance, Investment analysis, Portfolio Optimization
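For the risk-aversion-driven portfolio optimization step, a common formulation is mean-variance utility maximization. The sketch below is a minimal illustration of that step, assuming long-only weights and fully hypothetical return and covariance inputs; it is not the paper's actual pipeline.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical expected returns and covariance for three curated assets;
# in the described architecture these would come from the data pipelines.
mu = np.array([0.08, 0.12, 0.05])
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.02]])

def optimize_portfolio(risk_aversion):
    """Maximize mean-variance utility: w'mu - (lambda/2) w'Sigma w."""
    n = len(mu)
    objective = lambda w: -(w @ mu - 0.5 * risk_aversion * w @ sigma @ w)
    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
    bounds = [(0.0, 1.0)] * n                                       # long-only
    result = minimize(objective, np.full(n, 1.0 / n),
                      bounds=bounds, constraints=constraints)
    return result.x

# A risk-aversion estimate from the client-profiling step drives allocation.
print(optimize_portfolio(risk_aversion=2.0))   # tilts toward higher returns
print(optimize_portfolio(risk_aversion=10.0))  # tilts toward lower variance
```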
Florian Freund, Philippe Tamla, and Matthias Hemmje, University of Hagen, Faculty of Mathematics and Computer Science, 58097 Hagen, Germany
Comparison and selection of Named Entity Recognition (NER) tools and frameworks is a critical step in leveraging NER for Information Retrieval to support the development of Clinical Practice Guidelines. This paper presents a survey based on Kasunic’s survey research methodology to identify the criteria used by Machine Learning (ML) experts to evaluate NER tools and their significance in the selection process. In addition, it examines the main challenges faced by ML experts when choosing suitable NER tools and frameworks. Using Nunamaker’s methodology, the article begins with an introduction to the topic, contextualizes the research, reviews the state of the art in science and technology, and identifies challenges for an expert survey on NER tools and frameworks. This is followed by a description of the survey’s design and implementation. The paper concludes with an evaluation of the survey results and the insights gained, ending with a summary and conclusions.
Expert Survey, Natural Language Processing, Named Entity Recognition, Machine Learning, Cloud Computing.
Zouheir BANOU, Sanaa EL FILALI, El Habib BENLAHMAR, Laila ELJIANI, and Fatima-Zahra ALAOUI, Faculty of Sciences Ben M’Sik – Hassan II University, Bd Commandant Driss Al Harti, 7955, Casablanca, Morocco
Figurative language detection is a challenging task in natural language processing (NLP), especially for morphologically rich languages like Arabic. This study investigates the effectiveness of pre-trained language models (PLMs) for detecting hyperbole and metaphor in Arabic, comparing general-purpose models of varying sizes (mT5-Small, mT5-Base, and mT5-Large) with a specialized, fine-tuned model (MMFLD) trained specifically for figurative language tasks. Results indicate that while larger models such as mT5-Large excel in capturing complex figurative expressions, the task-specific MMFLD model achieves competitive performance, especially in metaphor detection. This highlights the benefits of both model size and specialized training in figurative language tasks.
Figurative Language Detection, Arabic NLP, Pre-trained Language Models (PLMs).
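Since mT5 is a text-to-text model, detection can be cast as generating a label word from a prompted sentence. The sketch below shows that framing at inference time; the prompt template, label words, and checkpoint choice are illustrative assumptions rather than the authors' exact fine-tuning setup.

```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

# Illustrative checkpoint; the study compares mT5-Small/Base/Large and MMFLD.
model_name = "google/mt5-small"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

def classify(sentence):
    # Cast detection as text-to-text: the model is trained to emit a label word.
    prompt = f"classify figurative language: {sentence}"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=4)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Before fine-tuning the output is meaningless; after fine-tuning on labeled
# Arabic examples it would emit e.g. "hyperbole", "metaphor", or "literal".
print(classify("قلبي يحترق شوقاً"))  # "my heart burns with longing" (metaphor)
```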
Heidrich Vicci, College of Business, Florida International University, USA
The IoT provides users with dynamic, rich, context-aware services that are highly responsive to their needs. Users can remotely monitor and control the environment. The IoT will allow more direct integration of the physical world into computer-based systems, resulting in improved accuracy, efficiency, and economic benefit in addition to reduced human intervention. The basic premise is to have objects or things working for humans rather than humans working for them. Usage of the term follows a similar trend in both economics and search engine results, which could indicate a correlation between the two. More than 16 years after its inception, the technology remains a hot topic both in the business world and in academia (Jagarlamudi et al., 2022; Pradeep et al., 2021).
Internet of Things (IoT), computer-based systems, human intervention.
Zhixiang Zhang1, Ang Li2, 1Ruben S. Ayala High School, 14255 Peyton Dr, Chino Hills, CA 91709, 2California State University, Long Beach, 1250 Bellflower Blvd, Long Beach, CA 90840
This project aims to solve the problem of providing real-time, personalized feedback on basketball shooting form using machine learning (ML). By comparing a user’s body angles during their shot to those of professional players, the program delivers tailored suggestions for improvement. The core technologies used include pose detection through computer vision and a machine learning model that analyzes and compares joint angles. Challenges included fine-tuning the model’s confidence score to ensure accurate comparisons between users and pros, handling image quality issues, and providing clear feedback to users of different skill levels. The experiments showed that when professional players were compared to themselves, the system returned very high similarity scores, confirming the model’s accuracy. The project stands out because of its personalized feedback feature, helping both beginner and advanced users improve their shooting form. By addressing common limitations such as image quality and skill variability, this tool offers a unique solution for athletes looking to refine their performance.
Basketball Analysis, Machine Learning Comparison, Computer Vision.
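The core comparison can be illustrated with plain geometry: each joint angle is computed from three pose keypoints, and the user's angles are scored against a professional reference. The sketch below is a minimal version of that idea; the keypoints, tolerance, and reference angle are hypothetical values, not the project's trained model.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by keypoints a-b-c, e.g. shoulder-elbow-wrist."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def similarity(user_angles, pro_angles, tol=30.0):
    """Map mean absolute angle difference to a 0-1 similarity score."""
    diff = np.abs(np.asarray(user_angles) - np.asarray(pro_angles))
    return float(np.clip(1.0 - diff.mean() / tol, 0.0, 1.0))

# Hypothetical 2D keypoints (e.g. from a pose detector) at the release frame.
user_elbow = joint_angle((0.42, 0.30), (0.50, 0.42), (0.40, 0.52))
pro_elbow = 95.0  # reference angle measured from professional footage
print(similarity([user_elbow], [pro_elbow]))  # near 1.0 means pro-like form
```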
Zahraa Shams Alden1, 2 and Oguz Ata3, 1University of Altinbas, Electrical and Computer Engineering, Turkey, 2University of Kerbala, Tourism Science, Iraq, 3University of Altinbas, Information Technology, Turkey
The analysis of medical images is a rapidly growing area of study, and speed and precision are essential in medical image analysis. Deep learning may aid in resolving medical image processing issues, but it requires datasets labelled by experts to learn effectively. This can be difficult to achieve in the medical field, where access to large amounts of labeled data may be limited. Another challenge is the complexity of medical data. Therefore, this study proposes a deep neural network-based model for medical imaging to detect osteoporosis using transfer learning with MobileNetV2. Class weights are used to alleviate class imbalance, and a learning rate schedule improves model adaptability. The model was created in two variants: one with a learning rate schedule and class weights, reaching an accuracy of 96%, and a second with only a learning rate schedule, reaching an accuracy of 94%. The experimental results illustrate the efficiency of the proposed framework for the future design of deep learning models that predict bone fractures and speed up medical data analysis and interpretation.
Medical image analysis, Machine learning, CNN, Transfer Learning, Osteoporosis, Deep learning, MobileNetV2.
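The described setup, transfer learning with a frozen MobileNetV2 backbone plus a learning-rate schedule and class weights, maps directly onto the Keras API. The sketch below illustrates the combination; the input size, classifier head, schedule values, and class-weight numbers are assumptions, not the study's exact configuration.

```python
import tensorflow as tf

# Transfer learning: reuse ImageNet features, train only the new head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # osteoporosis vs. normal
])

# Learning-rate schedule: exponential decay improves late-training stability.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)
model.compile(optimizer=tf.keras.optimizers.Adam(schedule),
              loss="binary_crossentropy", metrics=["accuracy"])

# Class weights counter imbalance by up-weighting the rarer class, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=20,
#           class_weight={0: 1.0, 1: 3.0})  # weights derived from class counts
```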
Li-Sheng Chen1 and Shu-Han Liao2, 1Department of Computer Science and Information Engineering, National Ilan University, Ilan, 260007, R.O.C., 2Department of Electrical Engineering, Tamkang University, New Taipei, 251301, R.O.C.
Low Earth Orbit (LEO) satellites exhibit high mobility, leading to frequent handover challenges. Addressing these handover issues is crucial for maintaining seamless and stable service connections. In this paper, we tackle handover problems in LEO networks by utilizing the D1 event, as discussed in the 3rd Generation Partnership Project (3GPP). Unlike in terrestrial networks, the difference between the reference signal received power (RSRP) at the cell edge and the cell center is minimal in non-terrestrial networks (NTN). Therefore, 3GPP has been exploring location-based handover methods using absolute thresholds instead of comparing the RSRP of serving and neighboring cells in handover events. We introduce the D1 event as a handover trigger and explore handover parameters in conjunction with the UE's position (referred to as enhanced D1) to ensure reliable handover for NTN. Simulation results show that enhanced D1 handover outperforms traditional D1 handover, particularly in reducing ping-pong effects and handover failures (HOF).
Low earth orbit (LEO), Non-terrestrial networks (NTN), Mobility, Satellite communication, Handover.
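In 3GPP TS 38.331, the D1 event's entering condition compares two distances against absolute thresholds: the UE has moved beyond threshold1 from the serving cell's reference location while coming within threshold2 of the target's. The sketch below illustrates that trigger logic with hysteresis; the numeric values are illustrative, and the paper's enhanced D1 would additionally incorporate the UE's position into the decision.

```python
# Simplified D1 entering condition (after 3GPP TS 38.331):
#   Ml1 - Hys > Thresh1  and  Ml2 + Hys < Thresh2
# where Ml1/Ml2 are UE distances to the serving and target reference
# locations; hysteresis suppresses ping-pong near the thresholds.

def d1_entering(dist_serving, dist_target, thresh1, thresh2, hysteresis=0.0):
    cond1 = dist_serving - hysteresis > thresh1  # moving away from serving cell
    cond2 = dist_target + hysteresis < thresh2   # approaching target cell
    return cond1 and cond2

# Illustrative distances in km for a fast-moving LEO footprint.
print(d1_entering(dist_serving=310, dist_target=280,
                  thresh1=300, thresh2=300, hysteresis=5))
```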
Pranav Vaidik Dhulipala, Samuel Oncken, Steven Claypool, and Stavros Kalafatis, Department of Electrical and Computer Engineering, Texas A&M University, College Station, Texas 77845, USA
Human gesture recognition is often implemented in many HRI applications. Building datasets that involve human subjects while aiming to capture comprehensive diversity and all possible edge cases is often both challenging and labor-intensive. While applying the concept of domain randomization to build synthetic datasets helps address the problem, an innate reality gap always exists that needs to be mitigated. In this paper, we present and discuss a comprehensive performance comparison of our synthetic datasets with real ones and demonstrate the results.
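Domain randomization amounts to sampling scene parameters anew for every synthetic example so that real imagery falls within the training distribution. The sketch below illustrates the idea; all parameter names and ranges are hypothetical, not the dataset's actual generator.

```python
import random

def random_scene_config():
    """Draw a fresh, randomized scene configuration for one synthetic sample."""
    return {
        "light_intensity": random.uniform(0.3, 1.5),
        "light_azimuth_deg": random.uniform(0, 360),
        "camera_distance_m": random.uniform(1.0, 4.0),
        "background_texture": random.choice(["office", "lab", "outdoor", "noise"]),
        "subject_skin_tone": random.randint(1, 6),   # e.g. Fitzpatrick scale
        "gesture_speed_scale": random.uniform(0.7, 1.3),
    }

# Each rendered training image for gesture recognition uses a fresh config,
# so the real world appears to the model as just another randomized variation.
print(random_scene_config())
```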
Thomas McIver
This paper introduces an approach to quantum neural networks that combines the principles of data re-uploading and entanglement. Based on the Orchestrated Objective Reduction (Orch OR) theory proposed by Roger Penrose and Stuart Hameroff, the study explores how quantum mechanical processes can improve neural network capabilities. By reuploading classical data at different stages of computation and utilizing quantum entanglement, the proposed network aims to achieve advanced information processing and learning abilities. This approach not only enhances the network’s performance but also provides insights into the potential quantum basis of consciousness. The incorporation of these quantum operations within a feedback loop further enhances the learning process, potentially resulting in emergent behaviours reminiscent of consciousness.
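Data re-uploading re-encodes the classical input in every layer of the circuit, interleaved with trainable rotations and entangling gates. The sketch below illustrates this pattern with PennyLane; the two-qubit layout, gate choices, and PauliZ readout are illustrative assumptions, not the paper's exact circuit.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(x, weights):
    # Data re-uploading: the classical input x is re-encoded in every layer,
    # interleaved with trainable rotations and entangling CNOTs.
    for layer in weights:  # weights shape: (n_layers, 2, 3)
        for wire in range(2):
            qml.RY(x, wires=wire)              # re-upload the data point
            qml.Rot(*layer[wire], wires=wire)  # trainable single-qubit rotation
        qml.CNOT(wires=[0, 1])                 # entangle the two qubits
    return qml.expval(qml.PauliZ(0))

weights = np.array(np.random.uniform(0, np.pi, (3, 2, 3)), requires_grad=True)
print(circuit(0.5, weights))  # differentiable output for gradient-based training
```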
Ahmed Mahmoud Elbasha and Mohammad M. Abdellatif, Electrical Engineering Department, Faculty of Engineering, The British University in Egypt, Cairo, Egypt
This paper presents a novel AI-based smart traffic management system designed to optimize traffic flow and reduce congestion in urban environments. By analysing live footage from existing CCTV cameras, this approach eliminates the need for additional hardware, thereby minimizing both deployment costs and ongoing maintenance expenses. The AI model processes live video feeds to accurately count vehicles and assess traffic density, allowing for adaptive signal control that prioritizes directions with higher traffic volumes. This real-time adaptability ensures smoother traffic flow, reduces congestion, and minimizes waiting times for drivers. Additionally, the proposed system is simulated using PyGame to evaluate its performance under various traffic conditions. The simulation results demonstrate that the AI-based system outperforms traditional static traffic light systems by 34%, leading to significant improvements in traffic flow efficiency. The use of AI to optimize traffic signals can play a crucial role in addressing urban traffic challenges, offering a cost-effective, scalable, and efficient solution for modern cities. This innovative system represents a key advancement in the field of smart city infrastructure and intelligent transportation systems.
AI, ITS, IoT, Traffic Management.
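The adaptive-control idea, giving more green time to directions with higher measured traffic density, can be illustrated with a count-proportional split of the signal cycle. The sketch below is a minimal version; the cycle length and green-time bounds are illustrative assumptions, not the paper's tuned controller.

```python
def allocate_green_times(vehicle_counts, cycle=120, g_min=10, g_max=60):
    """Split a signal cycle across approaches in proportion to demand."""
    total = sum(vehicle_counts.values()) or 1
    times = {}
    for approach, count in vehicle_counts.items():
        share = cycle * count / total
        # Clamp to safe bounds so no approach is starved or over-served.
        times[approach] = max(g_min, min(g_max, round(share)))
    return times

# Vehicle counts per approach, e.g. from a CCTV-based vehicle detector.
print(allocate_green_times({"north": 42, "south": 18, "east": 7, "west": 12}))
# -> the north approach, with the heaviest queue, receives the longest green
```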