John Tsiligaridis, Department of Math and Computer Science, Heritage University, Toppenish, WA, USA
Autoencoders (AEs) are Deep Learning (DL) models that are well known for their ability to compress and reconstruct data. When an AE compresses input data, a latent space is created which yields a compressed representation of the original data with a smaller set of features. Genetic Algorithms (GAs), based on evolutionary principles, can be used to optimize various hyperparameters of a DL model. This work involves two tasks. First, it focuses on the application of an AE to image data, exploring various configurations of the AE structure and its constituent encoder/decoder structures built from Multi-Layer Perceptrons (MLPs). Visualizations of the AE loss functions during training are provided, along with latent space results obtained using clustering techniques. The second focus of the paper is the application of the GA to a Convolutional AE, where the Convolutional Neural Network (CNN) encoder/decoder structures are optimized for image classification by encoding the architecture as genes. We find that the AE is a flexible and robust model that can successfully be applied to a variety of image datasets, and that the GA-optimized model surpasses the plain AE model.
Machine Learning, Deep Learning, Autoencoders, Genetic Algorithms.
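To make the MLP encoder/decoder structure discussed above concrete, here is a minimal autoencoder sketch in PyTorch; the layer widths, latent size, and training details are illustrative assumptions, not the configurations studied in the paper.

```python
import torch
import torch.nn as nn

class MLPAutoencoder(nn.Module):
    """Symmetric MLP encoder/decoder with a small latent bottleneck."""
    def __init__(self, input_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),               # compressed representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),  # pixel values scaled to [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = MLPAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)  # stand-in batch of flattened images
for step in range(100):
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)  # reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final reconstruction loss: {loss.item():.4f}")
```

The latent vector z returned alongside the reconstruction is what clustering techniques would be applied to when examining the latent space.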
Sai Kiran Padmam, Partha Sarthi Samal,independent,United States of America
Online retail has come a long way since its early days of static web pages and manual price comparisons. The new frontier embraces artificial intelligence (AI) to interpret user queries, mediate real-time auctions among multiple vendors, and deliver personalized recommendations at blazing speeds. This paper highlights how such AI agents operate under the hood, drawing upon machine learning, reinforcement learning, and multi-agent coordination principles. We also offer glimpses into emerging research challenges and future directions that may reshape online shopping entirely.
Andreas Shaji Jacob, Kulwant Singh, Muhammad Shafique, Ruben Movsisyan, Seungbin Lee, and Ugur Randa, Pacific States University, USA
The rapid adoption of Artificial Intelligence (AI) across industries has revealed limitations in traditional requirement analysis methodologies, which were not designed to address the complexities and iterative nature of AI-based projects. This paper proposes a refined thought process for requirement analysis tailored to the needs of AI-driven initiatives, whether AI is the primary focus or an integrated component of a larger system. By emphasizing the dynamic interplay between data, models, and deployment environments, the proposed approach departs from linear methodologies, advocating for an adaptive and iterative process. Using case studies, we demonstrate how this concept ensures better alignment with business goals, enhances data utility, and improves model performance while addressing ethical considerations and practical constraints. This paper aims to provide practitioners, researchers, and project owners with actionable insights to optimize AI project outcomes in an increasingly complex technological landscape.
Lutfur Rahman Fahad, Mukit AL Elahi, Nayem Miah, Himel, Somon, Niaz Dhara, Mia, and Adipta, Department of Computer Science, Pacific States University, USA
Agile Scrum methodology is widely regarded as a transformative approach to software development, emphasizing flexibility, collaboration, and iterative progress. However, its adoption is not without challenges, which vary significantly across organizations of different sizes, industries, and geographic distributions. This study explores recurring issues such as resource constraints, communication barriers, and resistance to change, while highlighting trends and successful practices in addressing these challenges. By synthesizing insights from existing literature and limited interviews, this research aims to provide actionable recommendations and a tailored framework for organizations seeking to optimize their Agile Scrum implementation. The findings contribute to a deeper understanding of how industry-specific contexts influence the effectiveness of Agile practices.
Jannatul Mawa, Md Nafis Azad Nobel, Sajeet Raj Aryal, Kazi, Taehyun Kim, Pritom Das, Mahbu Khan, Department of Computer Science, Pacific States University, USA
The cloud computing environment offers significant flexibility and access to computing resources at a reduced cost. This technology is rapidly transforming the landscape of e-services across various fields. In this paper, we examine cloud computing services and applications, highlighting examples of services offered by leading Cloud Service Providers (CSPs) such as Google, Microsoft, Amazon, HP, and Salesforce. We also showcase innovative cloud applications in areas like e-learning, Enterprise Resource Planning (ERP), and e-governance. This study aims to help individuals and organizations recognize how cloud computing can deliver customized, reliable, and cost-effective solutions across a diverse range of applications.
Sheresh Zahoor1, Pietro Liò1, Gaël Dias2, and Mohammed Hasanuzzaman3, 1University of Cambridge, 2Normandie Univ, GREYC, 3Queen's University Belfast
Diabetes is a global health crisis, demanding advanced genomic approaches to uncover molecular mechanisms and identify therapeutic targets. This study introduces the Genomic Causal Framework (GCF), a novel approach combining genomic data analysis, causal modeling, and predictive analytics to provide actionable insights into diabetes pathology. These include identifying potential therapeutic targets, such as CXCL8, S100A8, and COL1A1, implicated in chronic inflammation and complications like diabetic nephropathy. The framework also highlights regulatory genes, such as ROBO1 and FCGR2A, as upstream drivers of disease progression. Using the Diabetes genome dataset (GSE132831), we identify differentially expressed genes (DEGs) with pyDESeq2, stratifying upregulated and downregulated genes. These DEGs form the basis for constructing a protein-protein interaction (PPI) network, revealing critical functional pathways. The GCF framework integrates Causal Bayesian Networks (CBNs) and Probability Trees (PTrees) to move beyond prediction and enable causal reasoning. CBNs model causal relationships between genes and diabetic outcomes, while PTrees quantify their impact. Achieving 82.22% accuracy and 95% recall, GCF ensures reliable patient identification, with SHAP analysis enhancing interpretability and biological relevance. Its integration of causal reasoning with predictive analytics prioritises biologically relevant features for clinical and research applications. By bridging causal inference with functional genomics, this study advances biomarker discovery and therapeutic target identification, providing a powerful tool for precision medicine in Type 2 Diabetes. Unlike traditional machine learning, our approach enhances interpretability while uncovering critical insights into disease development and progression.
Diabetes, Causal Bayesian Network, Probability Trees, Genomics.
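As an illustration of the DEG stratification step described above, the sketch below separates up- and downregulated genes from a differential-expression results table of the kind pyDESeq2 produces; the column names (log2FoldChange, padj), thresholds, and example genes are assumptions for illustration, not the authors' exact pipeline.

```python
import pandas as pd

# Hypothetical differential-expression results: one row per gene with a
# log2 fold change and an adjusted p-value (column names are assumptions).
results = pd.DataFrame({
    "gene": ["CXCL8", "S100A8", "COL1A1", "ROBO1", "FCGR2A"],
    "log2FoldChange": [2.1, 1.8, 1.5, -1.2, 0.9],
    "padj": [0.001, 0.004, 0.010, 0.030, 0.200],
})

# Keep genes that pass a significance threshold, then stratify by direction.
significant = results[results["padj"] < 0.05]
upregulated = significant[significant["log2FoldChange"] > 1.0]
downregulated = significant[significant["log2FoldChange"] < -1.0]

print("Upregulated:", upregulated["gene"].tolist())
print("Downregulated:", downregulated["gene"].tolist())
```

Gene lists produced this way would then seed the PPI network construction and downstream causal modeling described in the abstract.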
Kostas Dimitrios1 and Kostas Ioannis2, 1National and Kapodistrian University of Athens, 2University of Piraeus
In this article, we analyse how Information and Communication Technologies (ICT) can be used by university students in the teaching of algorithms. ICT-based algorithms can help create numerous applications that solve a wide variety of problems. In university laboratories, teachers and students use algorithms to solve numerical and algebraic problems; for systems of partial differential equations (PDEs), numerical analysis methods are applied. In research, too, most people use algorithms to validate their theoretical findings, and students need to choose suitable algorithms for the problems at hand. Nowadays, algorithms are widely used to solve problems in companies, organisations, and virtually every other context. This work is about teaching the algorithms used to solve problems with numerical analysis methods.
Algorithms, Innovative Technology, Analysis.
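As one concrete example of the kind of numerical-analysis algorithm discussed above, here is a minimal Newton's method sketch for root finding; the target function, derivative, and tolerance are illustrative choices, not taken from the article.

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Find a root of f using Newton's method: x_{n+1} = x_n - f(x_n)/df(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:  # converged once the update is negligible
            return x
    raise RuntimeError("Newton's method did not converge")

# Example: solve x**2 - 2 = 0, i.e. approximate sqrt(2).
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)  # ~1.41421356
```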
Mshabab Alrizah, Jazan University, Jazan, Saudi Arabia
EasyList is a widely used filter list that enhances online privacy and security by blocking tracking mechanisms, advertisements, and other unwanted web elements. As an open-source project, its sustainability is based on collaborative contributions, issue tracking, and continuous updates to address emerging challenges. GitHub plays a crucial role in facilitating the management of EasyList, offering tools for version control, issue resolution, and community-driven improvements. This study explores the complexities of maintaining EasyList on GitHub by analyzing multiple-year issue reports. Through data collection, trend analysis, and resolution efficiency evaluation, this research provides insight into contributor engagement, frequently reported domains, and the overall effectiveness of the maintenance process. The findings highlight the importance of community participation in maintaining EasyList, the ongoing need for adaptive strategies against evolving tracking and ad-serving techniques, and the broader implications for open-source project management.
Ad-blocking, open-source maintenance, GitHub, tracking prevention, filter lists, crowdsourcing collaboration.
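A minimal sketch of the kind of issue-report collection such a study relies on, using GitHub's public REST API; the field selection is an illustrative assumption, and a real analysis would add authentication and pagination handling.

```python
import requests

# Fetch recent issues from the EasyList repository via GitHub's REST API.
# Unauthenticated requests are rate-limited; a token would normally be used.
url = "https://api.github.com/repos/easylist/easylist/issues"
resp = requests.get(url, params={"state": "closed", "per_page": 50})
resp.raise_for_status()

for issue in resp.json():
    # Skip pull requests, which the issues endpoint also returns.
    if "pull_request" in issue:
        continue
    print(issue["number"], issue["created_at"], issue["title"][:60])
```

Aggregating the created_at and closed_at timestamps over several years is one straightforward way to compute the resolution-efficiency trends the study describes.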
Theodora-Stavroula Korma, Department of Communication and Information studies, Rijkuniversiteit Groningen, Groningen, The Netherlands
Predictive policing, an algorithm-driven crime prevention initiative, claims to render the criminal justice system more effective and neutral. Yet this essay argues that these algorithmic models reinforce system-level prejudices and unfairly target marginalized populations while amplifying injustice. Because these models draw from four decades of historical data shaped by biased police operations, they can magnify racial profiling and harden social hierarchies. Furthermore, these systems' lack of transparency and accountability has ethical consequences for surveillance, due process, and civil rights. In line with Design Justice principles, this paper calls for a redesign of predictive policing oriented not toward control by systems but toward the empowerment of communities. Instead of being used as enforcement tools, these algorithms must be redesigned to address root causes of social harm, promote equitable resource allocation, and engage communities in decision-making. Through participatory governance and moral algorithmic design, predictive technologies can serve justice rather than subvert it, so that communities are protected, not monitored.
Predictive policing, algorithmic bias, systemic injustice, racial profiling, Design Justice.
Anthony Chidi Nzomiwu & Michael Nwobodo, Krakow University of Economics, Poland
This comprehensive review examines the convergence and integration of emerging technologies and their transformative impact across industries, drawing from empirical research and documented case studies from 2000 to 2025. The analysis demonstrates how the fusion of physical, digital, and biological technologies has created unprecedented synergies, fundamentally altering traditional operational paradigms. Through detailed examination of implementations in manufacturing, healthcare, financial services, and agriculture, the study reveals patterns of successful technology integration and their measurable impacts. Key findings indicate that successful technological transformation requires systematic attention to three critical dimensions: technical infrastructure, organizational capabilities, and societal implications. The research identifies significant challenges, including interoperability issues, security vulnerabilities, workforce transformation, and ethical considerations, while providing evidence-based frameworks for addressing these challenges. The study contributes to both theoretical understanding and practical implementation of emerging technologies, offering insights for policymakers, business leaders, and researchers. The conclusion synthesizes strategic implications for future technological development, emphasizing the need for integrated approaches to innovation, governance, and sustainability.
Technological convergence; Digital transformation; Industry 4.0; Systems integration; Innovation management; Artificial intelligence; Sustainable technology; Digital ethics; Organizational capabilities; Cyber-physical systems; Smart manufacturing; Digital infrastructure; Technological innovation.
Vasanta Kumar Tarra, Lead Engineer at Guidewire Software
Modern digital businesses rely increasingly on partner portals for efficient engagement with outside stakeholders; however, maintaining security and scalability in multi-tenant systems remains challenging. This paper investigates the requirements for secure multi-tenant portals and proposes a practical solution based on seamless integration between Salesforce and AWS S3, drawing on the strengths of both systems: Salesforce for its strong CRM features and AWS S3 for its exceptional scalability and storage security. Conventional techniques can struggle with complex data segmentation, performance problems, and compliance rules. The paper provides thorough architectural guidance on dynamic management, safe storage, and efficient data retrieval across numerous tenants without compromising privacy or performance. To guarantee data isolation and integrity, the core strategies call for the adoption of Salesforce's granular sharing models, AWS S3's bucket-level policies, and tightly constrained API interfaces. Key results show that this integration not only helps to manage partner ecosystems but also significantly reduces operating risks and infrastructure costs, demonstrating how businesses can satisfy strict security and compliance standards while still giving their partners a seamless, simple interface. The solution addresses pragmatic issues including evolving security concerns, growing data volumes, and differing partner needs, providing companies of all sizes with a scalable platform. Emphasizing the need for governance, automation, and proactive monitoring in a multi-tenant system with a secure integration plan, the paper shows how Salesforce's automation tools—Flow and Apex among others—together with AWS's native event-driven architecture can boost operational efficiency and reduce manual involvement. The article covers lifecycle management, encryption best practices, and role-based access control (RBAC), key elements of a secure environment, and discusses how Experience Cloud personalization of partner-facing interfaces and dynamic presentation of S3-hosted content inside the Salesforce user interface improve the user experience. These measures ensure that partners receive the required information in a timely and appropriate manner. The recommended structure helps companies adapt more quickly to market needs by enabling agility and rapid onboarding of new partners. Combining technical clarity with pragmatic applicability, the paper serves as both a technical guidebook and a strategic framework for creating future-ready partner portals with established technologies such as Salesforce and AWS S3, linking high-level strategy to real-world execution and supporting IT architects, developers, and business leaders who want to future-proof their partner engagement activities through a secure, scalable, and successful multi-tenant portal design.
Salesforce, AWS S3, Multi-Tenant Architecture, Partner Portal, Cloud Security, Data Segregation, Identity Management, Integration
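As a small illustration of tenant-isolated content delivery from S3 of the kind described above, the sketch below generates a short-lived pre-signed URL scoped to a tenant-specific key prefix; the bucket name, prefix convention, and expiry are illustrative assumptions rather than the paper's exact design.

```python
import boto3

s3 = boto3.client("s3")

def tenant_download_url(bucket: str, tenant_id: str, filename: str,
                        expires_in: int = 900) -> str:
    """Return a time-limited URL for one tenant's object.

    Keys are namespaced per tenant (an assumed convention), so a portal
    backend can hand out links without exposing other tenants' data.
    """
    key = f"tenants/{tenant_id}/{filename}"
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_in,  # URL stops working after this many seconds
    )

# Example: a 15-minute link for one partner's report.
print(tenant_download_url("example-partner-portal", "acme-corp", "q3-report.pdf"))
```

In practice the bucket policy would additionally restrict each portal service role to its own tenants/&lt;id&gt;/ prefix, so even a leaked credential cannot cross tenant boundaries.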
Yasodhara Varma Rangineeni, Vice President at JPMorgan Chase, USA
Training ML models at scale poses a major challenge in the era of data-driven decision-making. Extremely large datasets, more complex models, and fast experimentation demand highly effective distributed systems. This work explores the improvement of distributed model training by means of three strong tools for parallel computing, cluster management, and GPU acceleration: Dask, Ray, and EMR Rapids. The article looks at how these technologies can shorten the time to insight, reduce bottlenecks, and optimize model training processes. We provide a complete architecture for distributed training of big models using Dask's flexible task scheduling, Ray's ability to scale workloads across clusters, and GPU-accelerated data processing. The approach covers architectural issues, performance criteria across many use cases, and optimization strategies. Our findings show significant improvements in training time, resource efficiency, and scalability compared to traditional methods. Combining these technologies guarantees scalability for future needs, maintains cost-effectiveness, and offers a quick way to manage large-scale model training. This paper provides a practical road map for companies hoping to use distributed systems to improve their machine learning systems, enabling more effective scaling, testing, and model deployment.
Distributed model training, Dask, Ray, EMR Rapids, scalable machine learning, parallel computing, GPU acceleration, big data processing, machine learning frameworks, distributed computing, optimization techniques, performance evaluation, real-time applications, scalability, resource utilization, load balancing, model training pipeline, data preprocessing, performance metrics, resource allocation, machine learning workflows, parallelization.
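A minimal sketch of the Ray-style task parallelism discussed above: independent training trials are dispatched across a cluster as remote tasks. The toy objective and parameter grid are illustrative assumptions, not the paper's benchmark workload.

```python
import ray

ray.init()  # connects to an existing cluster, or starts a local one

@ray.remote
def train_trial(learning_rate: float) -> tuple[float, float]:
    """Stand-in for one training run; returns (learning_rate, score)."""
    # A toy objective in place of a real training loop.
    score = -(learning_rate - 0.1) ** 2
    return learning_rate, score

# Launch trials in parallel across the cluster and gather the results.
futures = [train_trial.remote(lr) for lr in (0.01, 0.05, 0.1, 0.2, 0.5)]
results = ray.get(futures)
best_lr, best_score = max(results, key=lambda r: r[1])
print(f"best learning rate: {best_lr} (score {best_score:.4f})")
```

Dask exposes the same pattern through client.map/client.gather, which is what makes the two frameworks natural complements for the scheduling layer of such an architecture.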
Ali Asghar Mehdi Syed, IT Engineer at World Bank Group, USA
Originally something businesses did merely to avoid problems, disaster recovery (DR) is now a must for any corporation running its operations across multiple clouds. Controlling disaster recovery across various cloud services is difficult for several important reasons: complicated processes, long-running repairs, human error, and inconsistent outcomes can all threaten a company's survival. Automated disaster recovery solutions are critical where downtime can cost a great deal of money and customers expect service to be always available. This project creates a fully autonomous management system for cloud-based disaster recovery across many different environments. The most important goal is reducing the waste produced by manual activities. Straightforward cross-cloud communication, health-based decision-making, and real-time monitoring tools help to accelerate recovery times and raise system reliability. These advances make it possible to adapt recovery techniques to different types of disruptions and to follow policies in complex, distributed environments. Numerous studies and practical models show that lowering recovery time objectives (RTOs) and recovery point objectives (RPOs) makes operations significantly more robust and resource-efficient. The method proved scalable across several industries, including retail, healthcare, and finance, allowing its use in many others. This study shows that automated disaster recovery is more than a technical advance: businesses must keep customer trust, comply with regulations, and compete in a digital market where time is critical. Automating processes to raise resilience will enable businesses to move from a reactive to a proactive crisis-management approach, ensuring stability and turning a major shortfall into an advantage that supports long-term success.
Disaster Recovery, Multi-Cloud, Automation, Orchestration, Fault Tolerance, High Availability, Resilience Engineering, Cloud Computing, Recovery Time Objective (RTO), Recovery Point Objective (RPO), Business Continuity, Failover Management, Disaster Recovery as a Service (DRaaS), Cloud Resilience, Cross-Cloud Replication, Disaster Preparedness, System Redundancy, Infrastructure as Code (IaC), Automated Failover, Cloud-native Disaster Recovery.
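A toy sketch of the health-based failover decision described above: a coordinator polls per-cloud health endpoints and promotes a standby region when the primary fails repeated checks. The endpoints, threshold, and promotion step are entirely hypothetical placeholders for a real orchestration layer.

```python
import time
import requests

# Hypothetical health endpoints for a primary and a standby deployment.
ENDPOINTS = {
    "primary": "https://primary.example.com/healthz",
    "standby": "https://standby.example.com/healthz",
}
FAILURE_THRESHOLD = 3  # consecutive failed checks before failover

def is_healthy(url: str) -> bool:
    try:
        return requests.get(url, timeout=2).status_code == 200
    except requests.RequestException:
        return False

def monitor_and_failover() -> None:
    failures = 0
    while True:
        if is_healthy(ENDPOINTS["primary"]):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD and is_healthy(ENDPOINTS["standby"]):
                print("Promoting standby to primary")  # placeholder for real orchestration
                break
        time.sleep(10)  # polling interval
```

The interesting engineering, which this sketch elides, is in the promotion step: DNS or load-balancer cutover, replication catch-up, and the policy checks that keep RTO and RPO within target.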
Arugula Balkishan, Sr. Technical Manager at Hexaware Technologies, USA
Digital banking has gone through a period of great evolution, passing from mere online services to the core of customer engagement and loyalty. Customers now expect fast, smooth, and personalized banking services across all channels, and legacy systems cannot keep up. To meet these soaring expectations, banks of all sizes are turning to agile approaches and scalable solutions that let them innovate quickly. Microservices architecture plays a major role in this transition, as it allows organizations to decompose intricate systems into smaller, autonomous services that can be built, deployed, and upgraded independently of the entire platform. This freedom of operation not only reduces time to market but also yields greater system elasticity and better scalability. In this paper, we discuss how the convergence of two major platforms, the premier engagement banking tool Backbase and the powerful cloud services of Microsoft Azure, forms a highly compelling solution. Backbase provides the means to realize consumer-oriented digital interactions, while Azure guarantees scalability, security, and smooth integration with existing systems. Banks are thus given the opportunity to upgrade their platforms without the risky, large-scale chore of core-system replacement. We advocate a structured methodology that aligns the principles of microservices with the capabilities of Backbase and Azure to make banking platforms digitally innovative, agile, and resilient, and we show how this course improves deployment flexibility, delivers operational efficiency gains, and makes higher customer satisfaction a reality. Our results show empirical returns: cost-effective and efficient development, reduced downtime, and higher satisfaction rates. The essence of this model is the adoption of microservices to drive technological reform, taking full advantage of modern integration tools so that banks not merely survive but stay at the forefront of a future-proof innovation path. We therefore emphasize the steps of our evidence-driven method for keeping pace with digital disruption and leading the creation of future-oriented financial services: a secure, easy-to-scale solution that institutions can adjust to dynamic market conditions and customer needs.
Digital Banking, Microservices, Backbase, Azure Integration, Cloud Computing, Scalability, API Management, Financial Technology, Platform Optimization, Customer Experience, Core Banking Modernization, Agile Transformation, DevOps, Open Banking, Banking-as-a-Service (BaaS), Digital Transformation, Cloud-Native Applications, Security and Compliance, User-Centric Design, Omnichannel Banking, Continuous Deployment, Infrastructure as Code (IaC), Containerization, Kubernetes, Serverless Computing.
Sangeeta Anand1, Sumeet Sharma2, 1Senior Business System Analyst at Continental General, USA, 2Senior Project Manager at Continental General, USA
Strict government regulations, digital health initiatives, and massive volumes of patient data are driving fast changes in the healthcare sector. Conventional on-site data warehouse solutions often cannot simultaneously meet legal compliance and scalable performance requirements. In this article, we explore how Snowflake, a modern cloud-native data platform, can help healthcare companies address these challenges. This study primarily addresses how to use Snowflake's unique architecture to comply with industry rules such as HIPAA and the Health Information Technology for Economic and Clinical Health (HITECH) Act, while also making it simple to expand rapidly to meet growing data and analytical needs. The architectural best practices in this paper help safeguard healthcare data assets throughout their lifetime. These comprise configuring secure Virtual Private Snowflake (VPS) environments, applying role-based access control (RBAC), automatically encrypting data, and building fine-grained audits. Important for HIPAA's Privacy and Security Rules, the paper centres on data masking and dynamic data access rules that make data exchange safe while safeguarding patient privacy. The discussion covers how research organisations, payers, and providers can collaborate on data in a way that is both secure and compliant with regulations. Beyond security and compliance, the paper addresses the performance and scalability choices required to manage the particular requirements of healthcare environments. It examines Snowflake's "elastic computing" model, which allows storage and compute resources to scale independently, letting companies manage shifting workloads such as regulatory reporting, real-time claims processing, and clinical analytics without additional expenditure. Techniques for controlling costs, including resource monitoring and the use of materialised views and clustering to speed up queries, help ensure that the platform remains sustainable as volumes increase.
Snowflake, Healthcare Compliance, HIPAA, Data Warehousing, Risk Stratification, Gibbs Sampling, MCMC Methods, Healthcare Analytics, Patient Risk Prediction, Bayesian Inference, Healthcare Data Modeling, Privacy-Preserving Analytics, Big Data in Healthcare, Predictive Modeling, Comorbidity Analysis, Clinical Decision Support, Data Privacy, Health Informatics, Healthcare Data Security, High-Dimensional Data Analysis.
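A small sketch of the dynamic data masking idea mentioned above, issued through the Snowflake Python connector; the credentials, table, column, and role names are placeholders, and the exact policy logic would depend on the organization's access model.

```python
import snowflake.connector

# Placeholder credentials; real deployments would use key-pair auth or SSO.
conn = snowflake.connector.connect(
    account="example_account", user="example_user", password="...",
    warehouse="COMPLIANCE_WH", database="PHI_DB", schema="CLAIMS",
)

MASK_POLICY = """
CREATE MASKING POLICY IF NOT EXISTS ssn_mask AS (val STRING)
RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('CLINICAL_ANALYST') THEN val  -- authorized roles see the value
    ELSE 'XXX-XX-XXXX'                                    -- everyone else sees a mask
  END
"""

with conn.cursor() as cur:
    cur.execute(MASK_POLICY)
    # Attach the policy so the SSN column is masked at query time.
    cur.execute("ALTER TABLE patients MODIFY COLUMN ssn SET MASKING POLICY ssn_mask")
```

Because the policy evaluates CURRENT_ROLE() at query time, the same table can be shared with payers or research partners while each role sees only what HIPAA-aligned policy permits.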
Parth Jani, Project Manager at Molina Healthcare, USA
Essential for Industry 4.0, predictive maintenance allows companies to identify equipment issues and schedule timely repairs, thereby lowering running costs and unnecessary downtime. Although generally useful, typical predictive maintenance approaches may overlook the dynamic and rapidly changing nature of modern manufacturing environments. Reinforcement learning (RL), a subfield of machine learning in which agents learn optimal responses via interaction with their environment, offers a possible solution to this challenge. This paper examines the use of reinforcement learning methods in predictive maintenance systems for intelligent manufacturing. We aim to create adaptive maintenance protocols leveraging reinforcement learning that continuously adjust to real-time data from industrial equipment. We formulate the maintenance scheduling problem as a Markov Decision Process (MDP) and show how reinforcement learning agents can learn to balance maintenance expenses against the possibility of equipment failure, thereby enhancing long-term operational efficiency. Our simulations and case studies indicate that reinforcement learning models outperform conventional rule-based and statistical approaches by dynamically responding to machine conditions, production cycles, and multiple failure sources. We investigate the relevance of actor-critic architectures, policy gradient methods, and Deep Q-Networks (DQN) in this field. The models exhibit scalability and robustness, having been trained on data reflecting many operational scenarios. This paper also covers important issues including reward function construction, exploration-exploitation trade-offs, and the computing costs associated with implementing reinforcement learning in practical manufacturing environments. We define a system architecture combining reinforcement learning models with industrial IoT (IIoT) platforms and digital twins to support continuous learning and closed-loop feedback control. Reinforcement learning represents a dramatic transformation in predictive maintenance by enabling intelligent, self-optimizing systems capable of making complex, context-sensitive decisions. This work provides the framework for scalable, autonomous maintenance systems able to significantly increase the dependability and output of intelligent industrial environments.
Predictive Maintenance, Smart Manufacturing, Reinforcement Learning, Equipment Failure, Machine Learning, Time-to-Failure Prediction, Anomaly Detection in Manufacturing, Reward Function Design, Exploration vs. Exploitation in RL, Multi-Agent Systems for Maintenance, Adaptive Learning in Industrial Systems.
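To make the MDP formulation above concrete, here is a toy tabular Q-learning sketch for a two-action maintenance problem (run vs. maintain) over discretized machine-wear states; the state space, costs, and failure probabilities are invented for illustration and are far simpler than the deep RL methods the paper discusses.

```python
import random

N_STATES = 5           # discretized wear levels: 0 (new) .. 4 (worn out)
ACTIONS = ["run", "maintain"]
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Toy dynamics: running earns revenue but risks failure as wear grows."""
    if action == 1:                       # maintain: pay a fixed cost, reset wear
        return 0, -5.0
    if random.random() < 0.15 * state:    # failure probability rises with wear
        return 0, -50.0                   # failure: large cost, machine replaced
    return min(state + 1, N_STATES - 1), 10.0  # normal run: revenue, more wear

for _ in range(5000):
    s = 0
    for _ in range(50):
        a = random.randrange(2) if random.random() < EPSILON \
            else max((0, 1), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])  # Q-learning update
        s = s2

for s in range(N_STATES):
    print(f"wear={s}: best action = {ACTIONS[max((0, 1), key=lambda x: Q[s][x])]}")
```

The learned policy typically runs the machine at low wear and maintains at high wear, which is exactly the cost-versus-failure balance the abstract describes, here discovered from interaction rather than hand-coded rules.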
Pavan Paidy1, Krishna Chaganti2, 1AppSec Lead at FINRA, USA, 2Associate Director at S&P Global, USA
In fast-paced Agile development environments, including security without interfering with delivery has become a primary difficulty. This paper suggests a practical approach that employs threat modeling and Static Application Security Testing (SAST) from the beginning of the Agile Software Development Life Cycle (SDLC) to reconcile the two. Security has to be included early in the process, in the design and coding stages, rather than managed later. By including threat modeling in sprint planning and using user stories to enhance it, teams can proactively find weaknesses before code development starts. Integrating SAST tools directly into CI/CD pipelines allows instantaneous code security feedback without interfering with development operations. Beyond automated SAST scans triggered on code commit, the approach consists of cooperative threat-modelling sessions involving security engineers, testers, and developers. Early detection of major design flaws and insecure coding techniques by this dual-layered approach greatly reduced the time and cost of remediation. Teams reported that their awareness of secure coding guidelines grew and that production security issues declined. Cleanly integrating these techniques into Agile processes improves overall application resilience, keeps pace with DevOps speed, and promotes a culture of security accountability. This integration helps teams practicing DevSecOps, where speed and security must coexist, to create secure software continuously without sacrificing agility.
Agile SDLC, Secure DevOps, Threat Modeling, Static Application Security Testing (SAST), Application Security, DevSecOps, Secure Coding, Security Automation, Risk Mitigation, Software Assurance, Shift Left, CI/CD Security, Vulnerability Management, Code Security, Security by Design, Security Integration, Developer Awareness, Threat Analysis, Agile Security, Security Testing, Secure Pipelines, SDLC Integration, Code Review, Security Culture, Security Feedback Loop, Secure Software Development, DevOps Security, Continuous Security, Agile Threat Modeling, Automation in Security, Application Resilience, Security Best Practices, Developer-Centric Security, Design-Time Security, Pipeline Security, Continuous Testing, Security Compliance, Agile Practices, Security Engineering, Integrated Security Tools, Security Collaboration, Early Detection, Code-Level Security, Collaborative Security, Risk-Based Testing, Prevention over Cure, Security Framework, Secure Agile Workflows, Embedded Security.
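A minimal sketch of the commit-triggered SAST gate described above, using the open-source Python SAST tool Bandit as a stand-in for whatever scanner a team adopts; the severity threshold and source directory are assumptions.

```python
import json
import subprocess
import sys

# Run Bandit over the source tree and emit machine-readable JSON results.
result = subprocess.run(
    ["bandit", "-r", "src/", "-f", "json"],
    capture_output=True, text=True,
)
report = json.loads(result.stdout)

# Fail the pipeline if any high-severity finding is present.
high = [i for i in report.get("results", []) if i["issue_severity"] == "HIGH"]
for issue in high:
    print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")

sys.exit(1 if high else 0)  # non-zero exit blocks the CI stage
```

Wired into the commit or merge-request stage of a CI pipeline, a script like this gives developers the instantaneous feedback the paper emphasizes, while the severity gate keeps noise from blocking delivery.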
Anusha Atluri1 and Teja Puttamsetti2, 1Sr. Oracle Techno-Functional Consultant at Oracle, USA, 2Senior Integration Specialist at Caesar's Entertainment, USA
Businesses in the modern digital era find an increasing need for cloud-native solutions offering scalability, agility, and better operational efficiency as they replace outdated legacy systems. Complex, rigid designs, inadequate systems, and data silos often slow modernization efforts. This paper looks at how Oracle Fusion Applications combined with Oracle Integration Cloud (OIC) provide a strategic and effective way of handling these problems. While OIC offers a strong integration platform for optimal communication between cloud and on-site systems, Oracle Fusion is a complete collection of business solutions meant to streamline core company processes. OIC's low-code, user-friendly platform allows businesses to quickly create connections, simplify processes, and synchronize data across systems without heavy customization. This paper offers a tiered deployment strategy allowing companies to incrementally improve procedures while preserving significant historical data. This approach yields lower operational friction, faster data flow, improved user experiences, and better decision-making by means of aggregated insights. Taken together, Oracle Fusion and OIC show businesses improving not just their technology but also their operating structure, becoming more flexible, intelligent, and future-ready. Beyond these advantages, this strategy lets companies continuously improve their digital capacity, expand naturally, and change over time to meet their needs. Strong cloud applications combined with flexible integration technologies offer a remarkable path for innovation and development as companies expand. Our work shows that combining older systems with current cloud architecture, by way of suitable tools and approaches, is not just feasible but a driver of ongoing digital transformation.
Oracle Fusion, Oracle Integration Cloud, Legacy Modernization, Cloud Migration, System Integration, Digital Transformation, Middleware, Enterprise Architecture, SaaS Integration, API Management, Business Process Automation, Hybrid IT
Abdul Jabbar Mohammad, UKG Lead Technical Consultant at Metanoia Solutions Inc, USA
In the fast-paced manufacturing sector, where time is critical and worker productivity significantly influences profit margins, this case study investigates the strategic integration of Boomi and Kronos technology to improve labour management efficiency. The objectives were to merge multiple data sources, apply labour scheduling, and increase operational agility without interfering with the existing infrastructure. Using a hybrid integration approach combining Boomi's broad cloud-based integration platform with the precision of Kronos labour management systems, the manufacturing company synchronized timekeeping, attendance, and HR data across several departments in real time. This link made dynamic labour projections feasible, enhanced compliance monitoring, and greatly lowered scheduling mistakes. Emphasizing Boomi's prebuilt connectors alongside bespoke API flows appropriate for high-volume transactional needs, our approach offered flexibility and minimal downtime during deployment. Among the notable findings were a 40% rise in schedule accuracy, a 25% drop in labour-related compliance problems, and much greater employee satisfaction brought about by more transparent scheduling. Boomi's scalability for enterprise-wide orchestration enables IT organizations to retain localized control over critical data within a hybrid architecture. This work is distinctive in that it not only solves an IT problem but also increases workforce adaptability in a demanding environment. Beyond basic system communication, this article shows how Boomi-Kronos links can affect operational reflexes in real time, team management, and work processes.
Boomi, Kronos, workforce optimization, hybrid integration, high-scale manufacturing, labour efficiency, API orchestration, real-time data sync, scheduling automation, integration platform as a service (iPaaS), employee scheduling, timekeeping integration, operational agility, HR data synchronization, labour compliance, cloud integration, workforce analytics, manufacturing efficiency, system interoperability, automation efficiency.
Sai Prasad Veluru1 and Mohan Krishna Manchala2, 1Software Engineer at Apple Inc, USA, 2ML Engineer at Meta, USA
A basic building block of modern geospatial data systems, real-time streaming enables applications like dynamic traffic forecasting and fleet monitoring, as well as live navigation on platforms like Apple Maps. These systems rely on gathering and evaluating high-frequency, location-specific data produced by millions of devices worldwide. Building scalable pipelines capable of managing high velocity and volume in real time raises several technical challenges, including the control of data surges, the maintenance of low-latency processing, and the guarantee of system stability and accuracy. To satisfy these needs, this paper investigates the effective merging of Apache Kafka with Apache Spark Structured Streaming, providing robust, high-throughput pipelines for geographic data ingestion and processing. We present a practical architectural framework tailored for geo-streaming uses, including design decisions, deployment techniques, and fault-tolerance mechanisms. By means of thorough performance testing under simulated real-world conditions, we identify important bottlenecks and provide optimization techniques that significantly increase throughput and reduce processing latency. Our studies investigate schema design for location data, windowed aggregations for spatiotemporal analysis, and approaches for handling the late-arriving and out-of-order data common in mobile environments. Anchoring our research in practical applications, this article offers a clear paradigm for engineering teams trying to create or grow real-time geospatial services. These insights improve system performance and help to create more intelligent, responsive location-based applications in industries like mobility services, urban planning, and logistics.
Real-time streaming, Apache Kafka, Apache Spark, Geospatial data, Apple Maps, Distributed systems, High-throughput ingestion, Location services, Streaming architecture, Data pipeline
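A condensed sketch of the Kafka-to-Spark pipeline pattern described above: consume location events from a Kafka topic, apply a watermark so late and out-of-order records are bounded, and compute windowed per-region counts. Topic names, the event schema, and window sizes are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("geo-streaming-sketch").getOrCreate()

# Assumed JSON event schema for device location pings.
schema = (StructType()
          .add("device_id", StringType())
          .add("lat", DoubleType())
          .add("lon", DoubleType())
          .add("region", StringType())
          .add("event_time", TimestampType()))

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "location-pings")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# The watermark bounds how long we wait for late data; the window groups
# events by time bucket and region for spatiotemporal aggregation.
counts = (events
          .withWatermark("event_time", "10 minutes")
          .groupBy(F.window("event_time", "5 minutes"), "region")
          .count())

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```

The watermark duration is the key tuning knob for mobile workloads: too short and late pings are dropped, too long and state grows unboundedly, which is one of the throughput-versus-accuracy trade-offs the paper examines.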
Swetha Talakola, Quality Engineer III at Walmart, Inc, USA
In the modern data-driven corporate environment, organizations depend on accurate and timely information to guide decisions, assess performance, and stay compliant. As companies grow, their reporting systems become increasingly complex, causing significant problems such as data inconsistencies, errors, and bottlenecks in manual validation processes. Many data sources, frequent modifications, and strict deadlines can produce contradictory reports, inconsistent KPIs, and general mistrust in the data. This paper looks at how automation can reduce reporting problems by enforcing consistency, accuracy, and adherence to organizational standards. We investigate the root causes of reporting disorder—including disparate data pipelines and the difficulty of handling last-minute changes—as well as how automated report validation can greatly reduce these difficulties. Automation accelerates the validation process and helps deliver findings promptly by improving data dependability and lowering the possibility of human error. By integrating automated validation into existing CI/CD systems and creating thorough test cases, companies can build repeatable, scalable validation processes that guarantee accuracy in all reports. Moreover, automation relieves teams from manual validation responsibilities so they can focus on more strategic activities. Through real-world case studies, we show how companies have efficiently adopted automated report validation, producing benefits such as fewer mistakes, faster time-to-market, and improved data accuracy. The research underlines that, beyond error avoidance, automated report validation creates a culture of data trust, enhances cooperation between technical and business teams, and makes reporting methods more flexible and responsive. By means of automation, companies can maximize their reporting systems and raise overall operational effectiveness and decision-making capability.
Report validation, automation tools, and scalable workflows ensure error-free enterprise reporting, improve data accuracy, maintain consistency, support business intelligence, and build trust in analytics through reliable, audit-ready, and end-to-end validated insights.
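A small sketch of the kind of automated report check discussed above, written as pandas assertions that could run inside a CI job; the column names, tolerance, and rules are invented for illustration.

```python
import pandas as pd

def validate_report(df: pd.DataFrame) -> list[str]:
    """Return a list of validation failures; empty means the report passes."""
    errors = []
    if df["revenue"].isna().any():
        errors.append("revenue column contains missing values")
    if (df["revenue"] < 0).any():
        errors.append("revenue must be non-negative")
    if not df["report_date"].is_monotonic_increasing:
        errors.append("report rows are not in date order")
    # Cross-field consistency: totals should match their components.
    mismatch = (df["revenue"] - df["units"] * df["unit_price"]).abs() > 0.01
    if mismatch.any():
        errors.append(f"{int(mismatch.sum())} rows where revenue != units * unit_price")
    return errors

report = pd.DataFrame({
    "report_date": pd.to_datetime(["2025-01-01", "2025-01-02"]),
    "units": [10, 12], "unit_price": [5.0, 5.0], "revenue": [50.0, 60.0],
})
failures = validate_report(report)
print("PASS" if not failures else failures)
```

Run on every pipeline execution, checks like these catch the inconsistent-KPI failures described above before a report ever reaches a business audience.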
Kelvin Ovabor & Travis Atkison, The University of Alabama, Tuscaloosa, Alabama, USA
In cloud computing, efficient resource allocation within data centers is crucial for reducing energy consumption and operational costs. Virtual Machine Placement (VMP) is a critical aspect, involving the strategic assignment of Virtual Machines (VMs) to physical servers. However, inefficient VM placement can lead to increased energy usage, posing significant challenges to operational efficiency and cost-effectiveness. This paper introduces a novel approach to VM placement, with the aim of minimizing total energy consumption within data centers. Leveraging the Ant Colony Optimization (ACO) algorithm, we customized its information heuristic based on the energy efficiency of physical machines (PMs) within data centers. Experimental validation demonstrates the scalability of our approach in large data center environments, where it notably outperforms the selected benchmark, the ACOVMP (Ant Colony Optimization Virtual Machine Placement) algorithm, in terms of energy consumption. Our findings highlight the effectiveness of our approach in optimizing VM placement decisions, contributing to ongoing efforts to enhance energy efficiency and operational sustainability in cloud data center environments.
Cloud, Virtual Machine, Ant Colony Optimization, Data Center, Energy Consumption.
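A compact sketch of the ACO idea applied to VM placement: ants build assignments guided by pheromone plus a heuristic that favors energy-efficient physical machines. The energy model, parameters, and heuristic here are simplified assumptions, not the paper's exact formulation.

```python
import random

VMS = [2, 4, 1, 3, 2]            # CPU demand of each VM (toy units)
PM_CAPACITY = [8, 8, 8]          # capacity of each physical machine
PM_EFFICIENCY = [1.0, 0.7, 0.5]  # assumed energy efficiency per PM (higher = better)
ALPHA, BETA, RHO, ANTS, ITERS = 1.0, 2.0, 0.1, 20, 50

pheromone = [[1.0] * len(PM_CAPACITY) for _ in VMS]

def build_solution():
    load = [0] * len(PM_CAPACITY)
    placement = []
    for v, demand in enumerate(VMS):
        feasible = [p for p in range(len(PM_CAPACITY)) if load[p] + demand <= PM_CAPACITY[p]]
        # Selection probability is proportional to pheromone^ALPHA * efficiency^BETA.
        weights = [pheromone[v][p] ** ALPHA * PM_EFFICIENCY[p] ** BETA for p in feasible]
        p = random.choices(feasible, weights=weights)[0]
        load[p] += demand
        placement.append(p)
    return placement, load

def energy(load):
    # Toy model: active PMs cost more energy when they are less efficient.
    return sum(l / PM_EFFICIENCY[p] for p, l in enumerate(load) if l > 0)

best, best_e = None, float("inf")
for _ in range(ITERS):
    for _ in range(ANTS):
        placement, load = build_solution()
        e = energy(load)
        if e < best_e:
            best, best_e = placement, e
    # Evaporate pheromone, then reinforce the best-so-far placement.
    pheromone = [[(1 - RHO) * t for t in row] for row in pheromone]
    for v, p in enumerate(best):
        pheromone[v][p] += 1.0 / best_e

print("best placement:", best, "energy:", round(best_e, 2))
```

The efficiency term in the selection weights plays the role of the customized heuristic information described in the abstract, biasing ants toward energy-efficient PMs while pheromone accumulates on placements that proved cheap overall.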
Iblal Rakha1 and Noorhan Abbas2, 1Oxford University Hospitals NHS Foundation Trust, Oxford, OX3 9DU, UK, 2University of Leeds, Woodhouse, Leeds, LS2 9JT, UK
The NHS faces mounting pressures, resulting in workforce attrition and growing care backlogs. Pharmacy services, critical for ensuring medication safety and effectiveness, are often overlooked in digital innovation efforts. This pilot study investigates the potential of Large Language Models (LLMs) to alleviate pharmacy pressures by answering clinical pharmaceutical queries. Two retrieval techniques were evaluated: Vanilla Retrieval Augmented Generation (RAG) and Graph RAG, supported by an external knowledge source designed specifically for this study. ChatGPT 4o without retrieval served as a control. Quantitative and qualitative evaluations were conducted, including expert human assessments for response accuracy, relevance, and safety. Results demonstrated that LLMs can generate high-quality responses. In expert evaluations, Vanilla RAG outperformed other models and even human reference answers for accuracy and risk. Graph RAG revealed challenges related to retrieval accuracy. Despite the promise of LLMs, hallucinations and the ambiguity around LLM evaluations in healthcare remain key barriers to clinical deployment. This pilot study underscores the importance of robust evaluation frameworks to ensure the safe integration of LLMs into clinical workflows. However, regulatory bodies have yet to catch up with the rapid pace of LLM development. Guidelines are urgently needed to address the issues of transparency, explainability, data protection, and validation, to facilitate the safe and effective deployment of LLMs in clinical practice.
Large Language Model Evaluation, Retrieval Augmented Generation, Clinical Question Answering, Knowledge Graphs, Healthcare Artificial Intelligence.
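A minimal sketch of the Vanilla RAG retrieval step evaluated above: embed the query and knowledge-base passages, rank by cosine similarity, and prepend the top passages to the prompt. The embedding model, toy passages, and prompt format are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed open embedding model; any sentence encoder would do.
model = SentenceTransformer("all-MiniLM-L6-v2")

knowledge_base = [
    "Drug A is contraindicated in severe renal impairment.",
    "Drug B requires dose adjustment when co-prescribed with Drug C.",
    "Drug D should be taken with food to reduce GI irritation.",
]
kb_emb = model.encode(knowledge_base, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query (cosine via dot product)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = kb_emb @ q
    return [knowledge_base[i] for i in np.argsort(scores)[::-1][:k]]

query = "Does Drug B interact with Drug C?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the LLM
```

Grounding the LLM in retrieved passages is what the study relies on to curb hallucination, which is why retrieval accuracy, the weak point found for Graph RAG, matters so much.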
Salahuddin Alawadhi1 and Noorhan Abbas2, 1University of Leeds Dubai, UAE, 2School of Computer Science, University of Leeds, United Kingdom
The integration of Retrieval-Augmented Generation (RAG) with Large Language Models (LLMs) has shown potential in providing precise, contextually relevant responses in knowledge-intensive domains. This study investigates the application of RAG for ABB circuit breakers, focusing on accuracy, reliability, and contextual relevance in high-stakes engineering environments. By leveraging tailored datasets, advanced embedding models, and optimized chunking strategies, the research addresses challenges in data retrieval and contextual alignment unique to engineering documentation. Key contributions include the development of a domain-specific dataset for ABB circuit breakers and the evaluation of three RAG pipelines: OpenAI GPT-4o, Cohere, and Anthropic Claude. Advanced chunking methods, such as paragraph-based and title-aware segmentation, are assessed for their impact on retrieval accuracy and response generation. Results demonstrate that while certain configurations achieve high precision and relevancy, limitations persist in ensuring factual faithfulness and completeness, critical in engineering contexts. This work underscores the need for iterative improvements in RAG systems to meet the stringent demands of electrical engineering tasks, including design, troubleshooting, and operational decision-making. The findings in this paper help advance research of AI in highly technical domains such as electrical engineering.
Retrieval-Augmented Generation (RAG), Electrical Engineering, ABB Circuit Breakers, Chunking, Embeddings
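A simple sketch of the paragraph-based chunking strategy assessed above: split the manual text on blank lines, then pack paragraphs into chunks under a size budget. The size limit and packing rule are illustrative assumptions.

```python
def paragraph_chunks(text: str, max_chars: int = 1000) -> list[str]:
    """Pack whole paragraphs into chunks no longer than max_chars."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)  # budget exceeded: start a new chunk
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

manual = "Ratings and limits.\n\nTripping characteristics...\n\nMaintenance intervals..."
for i, chunk in enumerate(paragraph_chunks(manual, max_chars=80)):
    print(i, repr(chunk[:60]))
```

Keeping paragraph boundaries intact is what distinguishes this strategy from fixed-size splitting: a tripping-characteristics table or rating note stays in one retrievable unit rather than being cut mid-specification.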
Jonathan Bennion1, Shaona Ghosh2, Mantek Singh3, Nouha Dziri4, 1The Objective AI, USA, 2Nvidia, USA, 3Google, USA, 4Allen Institute for AI (AI2), USA
Various AI safety datasets have been developed to measure LLMs against evolving interpretations of harm. Our evaluation of five recently published open-source safety benchmarks reveals distinct semantic clusters using UMAP dimensionality reduction and k-means clustering (silhouette score: 0.470). We identify six primary harm categories with varying benchmark representation. GretelAI, for example, focuses heavily on privacy concerns, while WildGuardMix emphasizes self-harm scenarios. Significant differences in prompt length distributions suggest confounds in data collection and in interpretations of harm, while also offering possible context. Our analysis quantifies benchmark orthogonality among AI benchmarks, allowing for transparency in coverage gaps despite topical similarities. Our quantitative framework for analyzing semantic orthogonality across safety benchmarks enables more targeted development of datasets that comprehensively address the evolving landscape of harms in AI use, however that is defined in the future.
AI benchmark meta-analysis, LLM Embeddings, Dimensionality reduction, K-means clustering, AI safety.
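A brief sketch of the embedding-clustering analysis described above: reduce prompt embeddings with UMAP, cluster with k-means, and report the silhouette score. The random array stands in for real prompt embeddings; the cluster count and UMAP settings are assumptions.

```python
import numpy as np
import umap
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Placeholder for real prompt embeddings (n_prompts x embedding_dim).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 384))

# Reduce to a low-dimensional space where cluster structure is easier to see.
reduced = umap.UMAP(n_components=2, random_state=0).fit_transform(embeddings)

# Cluster into six groups, matching the six harm categories in the paper.
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(reduced)

print("silhouette:", round(silhouette_score(reduced, labels), 3))
```

Tagging each point with its source benchmark then makes the coverage analysis direct: a benchmark concentrated in one or two clusters addresses a narrower slice of harms than one spread across all six.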
Zhiyuan Liu, School of Computing and Communications, Lancaster University
The essay begins by setting out a detailed scenario for the deployment of face recognition systems in public places. Based on this scenario, two statutes that companies need to focus on and a relevant legal case are critically discussed. The essay then integrates the two statutes into the scenario and makes critical recommendations for security design decisions, both managerial and technical, based on the legal requirements. The essay concludes with a summary of the findings and insights.
Network Protocols, Wireless Network, Mobile Network, Virus, Worms & Trojan.
Qi Huamei1 and Md Jahangir Alam2, 1School of Electronics Information Science, Central South University, Changsha, China, 2Department of Computer Science and Technology, Central South University, Changsha, China
The rise of Non-Alcoholic Fatty Liver Disease (NAFLD), associated with obesity and metabolic disorders, underscores the importance of developing precise prediction models for early identification. This research employs machine learning and survival analysis techniques to classify and forecast NAFLD using clinical and demographic data. The examined models include Decision Tree, Extra Trees, Random Forest (utilizing 10 estimators), and K-Nearest Neighbours (with K set to 3). For data preparation, KNN imputation was applied to address missing values, and MinMax scaling was used for standardization. Lasso regression (LassoCV) was implemented to select features and highlight significant variables to enhance model efficacy. Alongside the classification models, the Kaplan-Meier estimator (KaplanMeierFitter) and Cox Proportional Hazards Model (CoxPHFitter) were utilized to evaluate patient survival rates and to pinpoint risk factors. The ensemble models, specifically the Extra Trees and Random Forest classifiers, surpassed the baseline Decision Tree (88.28%) and KNN (91.56%) models, achieving accuracies of 92.54% and 92.63%, respectively. LassoCV contributed to improved feature significance, while survival analysis offered valuable insights into the progression of NAFLD. This study showcases the efficacy of ensemble methods and survival analysis in developing reliable and interpretable prediction models for NAFLD. Future research should aim to expand the dataset and incorporate additional clinical parameters.
NAFLD Prediction, Ensemble Learning, LassoCV Feature Selection, Survival Analysis, Cox Proportional hazards, Kaplan-Meier Estimator.
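A condensed sketch of the preprocessing-and-classification pipeline described above: KNN imputation, MinMax scaling, LassoCV-based feature selection, and an Extra Trees classifier. The synthetic data and hyperparameters are placeholders, not the study's dataset or tuned settings.

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LassoCV
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for clinical/demographic features with missing values.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = (X[:, 0] > 0).astype(int)           # toy NAFLD label tied to one feature
X[rng.random(X.shape) < 0.05] = np.nan  # ~5% missing entries

pipe = Pipeline([
    ("impute", KNNImputer(n_neighbors=5)),
    ("scale", MinMaxScaler()),
    ("select", SelectFromModel(LassoCV(cv=5))),  # Lasso-driven feature selection
    ("clf", ExtraTreesClassifier(n_estimators=10, random_state=0)),
])

scores = cross_val_score(pipe, X, y, cv=5)
print("accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```

Wrapping every step in one Pipeline keeps imputation and feature selection inside each cross-validation fold, avoiding the leakage that would otherwise inflate the reported accuracies.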
Messaoud MEZATI, Siham BEGGAA, Houria BENBOUBKEUR, Chahd BRAITHEL and Malak GHOULIA, Department of Computer Science and Information Technology, Kasdi Merbah University Ouargla, Algeria
Predicting pedestrian trajectories is a key challenge in intelligent transportation systems, robotics, and urban mobility, requiring models that balance accuracy, adaptability, and interpretability. Traditional Knowledge-Based (KB) models, including social force models, agent-based simulations, and reinforcement learning, offer structured decision-making but struggle with rapidly changing and complex environments. In contrast, Deep Learning (DL) techniques, such as LSTMs, Graph Neural Networks (GNNs), and Transformers, capture intricate movement patterns but often lack transparency. This study examines the hybridization of KB and DL models, integrating physics-based constraints with data-driven learning to enhance pedestrian behavior forecasting. A systematic classification of hybrid models is provided based on model structure, prediction tasks, AI integration, and real-world applications. Additionally, the study explores the potential of Reinforcement Learning (RL), Self-Supervised Learning, and Large Language Models (LLMs) in trajectory prediction. By bridging rule-based reasoning with adaptive learning, this work contributes to the development of safer, more flexible, and explainable pedestrian trajectory prediction models for applications in autonomous navigation, smart cities, and crowd management.
Pedestrian Trajectory Prediction, Knowledge-Based Models, Deep Learning, Autonomous Driving, Explainable AI.
Shraddha Sharma1, Anjali Sharma2 and Gaurav Vishwakarma3, 1Entergy Services, Houston Texas, USA, 2MP Electricity Board, Indore, MP, India, 3Reliance Power, Lucknow, UP, India
State estimation in power grids is the process of determining the most accurate operating state of an electrical power system using available measurements from sensors and meters. State estimation helps ensure reliability and efficiency in modern electrical networks. Traditional state estimation methods, such as the Weighted Least Squares (WLS) approach, often struggle with non-linearity, measurement noise, and data sparsity. Deep learning (DL) has emerged as a powerful alternative due to its ability to learn complex patterns and handle large datasets. This paper explores the application of deep learning techniques for state estimation in power grids and compares their performance against conventional methods.
State Estimation, Weighted Least Squares, Deep Learning, State Estimators, Power Grid, Bayesian Algorithm.
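To ground the comparison above, here is a minimal linear WLS state estimation sketch: with measurement model z = Hx + e and weight matrix W built from inverse measurement variances, the estimate solves the normal equations. The measurement matrix and noise levels are toy values, not a real grid model.

```python
import numpy as np

# Toy linear measurement model: z = H @ x_true + noise.
rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.95])               # e.g., two bus voltage states
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0],                   # a redundant "flow" measurement
              [2.0, 1.0]])
sigmas = np.array([0.01, 0.01, 0.02, 0.05])  # per-meter noise std devs
z = H @ x_true + rng.normal(scale=sigmas)

# WLS estimate: x_hat = (H^T W H)^{-1} H^T W z, with W = diag(1/sigma^2).
W = np.diag(1.0 / sigmas**2)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("true:", x_true, "estimate:", x_hat.round(4))
```

Real grids make h(x) nonlinear, so WLS must iterate Gauss-Newton steps around this linearized solve; learning the measurement-to-state mapping directly is what the DL alternatives discussed in the paper aim for.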
Muhammad Sarmad, Emanuele Mele, Rajat Srivastava, Marco Pulimeno, Massimo Cafaro, and Italo Epicoco, Department of Engineering for Innovation, University of Salento Lecce-73047, Italy
Accurate representation of oceanic conditions is fundamental for reliable climate modeling, weather forecasting, and environmental monitoring. However, ocean models and observational datasets often exhibit systematic biases due to limitations in model physics, parameterizations, resolution, or observational coverage. In this work, we propose a diffusion model for bias correction and systematically evaluate its performance for Sea Surface Temperature (SST) generation by varying different hyperparameters in the U-Net architecture. The model is trained to denoise simulated data and reconstruct the SST field guided by reanalysis data. Our results demonstrate that increasing the base channel depth significantly improves the model's performance, with improvements in convergence speed, reconstruction accuracy, and spatial detail retention. Quantitative metrics such as root mean squared error (RMSE), Pearson's correlation coefficient (PCC), and the coefficient of determination (R2) show notable gains up to a base channel depth of 64, beyond which performance gains plateau. A detailed temporal generalization analysis using seasonal batches every two months confirms the robustness of the model across varying SST regimes, while qualitative visualizations show sharp and coherent reconstructions with minimal error. The study highlights the trade-off between model complexity and performance and identifies 64 base channels as a computationally efficient and accurate configuration for SST modeling using diffusion-based generative methods.
Diffusion Models, Oceanic Dataset, Architectural Parameters, Bias Correction.
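For reference, the three evaluation metrics named above can be computed as in this short sketch; the random arrays stand in for reconstructed and reanalysis SST fields.

```python
import numpy as np

def rmse(pred, target):
    return np.sqrt(np.mean((pred - target) ** 2))

def pcc(pred, target):
    """Pearson correlation between the flattened fields."""
    return np.corrcoef(pred.ravel(), target.ravel())[0, 1]

def r2(pred, target):
    ss_res = np.sum((target - pred) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
truth = rng.normal(15.0, 2.0, size=(64, 64))             # stand-in reanalysis SST (deg C)
recon = truth + rng.normal(0.0, 0.3, size=truth.shape)   # stand-in reconstruction
print(f"RMSE={rmse(recon, truth):.3f}  PCC={pcc(recon, truth):.3f}  R2={r2(recon, truth):.3f}")
```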
Lee Seo-jun, Choi Seo-yeon, Oh Ji-soo, & Gyu Tae Bae, UC Berkeley, United States of America
South Korea, with approximately 63% of its land covered by forests, is highly susceptible to wildfires. Traditional fire detection methods—such as satellite imagery and ground-based observation—face significant limitations, including high operational costs, delayed response times, and vulnerability to weather conditions. This paper presents an efficient fire detection system for Vertical Take-Off and Landing (VTOL) Unmanned Aerial Vehicles (UAVs), utilizing Convolutional Neural Networks (CNNs). The integration of CNNs significantly improves detection accuracy, even in complex environments that challenge conventional approaches. In simulations designed to closely mimic real-world scenarios, the optimized algorithm achieved a 93% detection rate with 20% false positives and a frame latency of just 1.2 seconds. Additionally, deploying the model on a Raspberry Pi onboard a VTOL drone demonstrated its practical viability for real-time forest fire surveillance and rapid response. This study highlights the potential of drone-based, AI-powered fire detection systems as a powerful supplement to existing wildfire monitoring and prevention strategies.
Forest fire detection, Wildfires, VTOL drones, Unmanned Aerial Vehicle (UAV), Convolutional Neural Networks (CNNs), Real-time detection, False positives, Frame latency, Raspberry Pi, Onboard processing, Fire surveillance, AI-powered monitoring, Wildland fire prevention, Drone-based systems, Environmental monitoring
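A minimal sketch of onboard inference of the kind described above, running a converted CNN with the TensorFlow Lite runtime on a Raspberry Pi; the model path, input size, normalization, and decision threshold are assumptions.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter

# Load a fire-detection CNN converted to TFLite (path is a placeholder).
interpreter = Interpreter(model_path="fire_cnn.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def fire_probability(frame: np.ndarray) -> float:
    """Run one camera frame (H, W, 3 uint8) through the model."""
    x = frame.astype(np.float32) / 255.0  # assumed normalization
    x = np.expand_dims(x, axis=0)         # add batch dimension
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    return float(interpreter.get_tensor(out["index"])[0][0])

frame = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder camera frame
if fire_probability(frame) > 0.5:                # assumed decision threshold
    print("ALERT: possible fire detected")
```

Keeping the model in the lightweight tflite_runtime package rather than full TensorFlow is what makes per-frame latencies on the order of a second feasible on Raspberry Pi-class hardware.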
Sonu Kumar1, Anubhav Girdhar2, Ritesh Patil3 and Divyansh Tripathi4, 1R&D, Sporo Health, USA, 2Data Engineering and AI, Involead, India, 3Gen AI CoE, Capgemini, India, 4M.Tech in Data Science, IIT Roorkee, India.
As Agentic AI gains mainstream adoption, the industry invests heavily in model capabilities, achieving rapid leaps in reasoning and quality. However, these systems remain largely confined to data silos, and each new integration requires custom logic that is difficult to scale. The Model Context Protocol (MCP) addresses this challenge by defining a universal, open standard for securely connecting AI-based applications (MCP clients) to data sources (MCP servers). However, the flexibility of the MCP introduces new risks, including malicious tool servers and compromised data integrity. We present MCP Guardian, a framework that strengthens MCP-based communication with authentication, rate-limiting, logging, tracing, and Web Application Firewall (WAF) scanning. Through real-world scenarios and empirical testing, we demonstrate how MCP Guardian effectively mitigates attacks and ensures robust oversight with minimal overheads. Our approach fosters secure, scalable data access for AI assistants, underscoring the importance of a defense-in-depth approach that enables safer and more transparent innovation in AI-driven environments.
model context protocol, mcp, agentic ai, artificial intelligence, generative ai
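A small sketch of one of the controls named above, rate limiting, implemented as a token bucket that a gateway could apply per client before forwarding tool calls to an MCP server; the capacity and refill rate are illustrative, not MCP Guardian's actual configuration.

```python
import time

class TokenBucket:
    """Allow at most `rate` requests/second with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per MCP client; deny (and log) anything over the budget.
bucket = TokenBucket(rate=5.0, capacity=10)
for i in range(12):
    status = "forwarded" if bucket.allow() else "rejected (rate limit)"
    print(f"tool call {i}: {status}")
```

In a defense-in-depth gateway this check would sit alongside the authentication, logging, tracing, and WAF layers listed in the abstract, each cheap enough to keep the overall overhead minimal.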
Rachana S Potpelwar, U V Kulkarni, J M Waghmare, Shri Guru Gobind Singh Institute of Engineering and Technology, Computer Science and Engineering Department, Nanded, 431605, Maharashtra, India
Phishing attacks continue to pose a significant threat to online security, making efficient detection methods essential. This paper presents a lexical-based approach for detecting malicious URLs using deep learning algorithms, including Artificial Neural Networks (ANN), Multi-Layer Perceptrons (MLP), and Long Short-Term Memory (LSTM) networks. Our dataset consists of phishing and legitimate URLs labeled accordingly. To enhance detection accuracy, the dataset was preprocessed using the Term Frequency-Inverse Document Frequency (TF-IDF) method, converting the raw URL strings into meaningful numerical representations. The experimental results demonstrate that preprocessing substantially improves model performance. For LSTM, the accuracy improved from 90.05% (without preprocessing) to 90.77% (with preprocessing). These results highlight the effectiveness of combining lexical feature extraction with deep learning algorithms, offering a promising solution for real-time detection systems to safeguard against phishing attacks and enhance cybersecurity.
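A compact sketch of the lexical TF-IDF pipeline described above, using character n-grams over raw URL strings and an MLP classifier; the tiny URL list and hyperparameters are placeholders for the paper's labeled dataset and tuned models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Tiny placeholder dataset: 1 = phishing, 0 = legitimate.
urls = [
    "http://paypa1-secure-login.example.ru/verify",
    "https://www.wikipedia.org/wiki/URL",
    "http://bank0famerica.example.tk/update-account",
    "https://github.com/easylist/easylist",
]
labels = [1, 0, 1, 0]

# Character n-grams capture lexical cues (digit substitutions, odd TLDs, etc.).
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
model.fit(urls, labels)

print(model.predict(["http://secure-paypa1.example.tk/login"]))  # expect [1]
```

Because the features come only from the URL string itself, classification needs no page fetch, which is what makes the approach suited to the real-time detection setting the paper targets.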
Wallas Bruno S. Lira1, Gilton José Ferreira da Silva1, Silvio Mario Felix Dantas1, Barbara Cristina Silva Rosa2, Cassia Regina D'Antonio Rocha da Silva3, 1Department of Computer Science – Universidade Federal de Sergipe (UFS), Cidade Univ. Prof. José Aloísio de Campos, Av. Marcelo Deda Chagas, s/n, Bairro Rosa Elze, São Cristóvão/SE, CEP 49107-230, 2Department of Speech-Language Pathology – Universidade Federal de Sergipe (UFS), Cidade Univ. Prof. José Aloísio de Campos, Av. Marcelo Deda Chagas, s/n, Bairro Rosa Elze, São Cristóvão/SE, CEP 49107-230, 3Universidade Tiradentes, Campus II, Av. Murilo Dantas, 300, Farolândia, 49032-490, Aracaju, SE, Brazil
The dynamic process of discovering and documenting software requirements demands effective approaches. This study explores, through a survey conducted via Google Forms using the snowball technique, the application of Design Thinking (DT) stages in Requirements Engineering (RE). Findings indicate that integrating DT phases enhances Requirements Engineering activities, though maintaining architectural quality throughout the agile lifecycle remains challenging. It concludes that the synergy among DT, Requirements Engineering, and Software Architecture significantly improves the effectiveness of agile software projects.
Design Thinking, Requirements Engineering, Software Architecture, Software Development, Design Management.
Bohdan Vodianyk1, Enrique Nava Baro2, Alfonso Ariza Quintana2, Anton Popov3, 4, 1School of Industrial Engineering (Escuela de Ingenierías Industriales), Universidad de Málaga, Arquitecto Francisco Peñalosa, 6, Malaga, 29071, Spain, 2ETSI Telecomunicación, Universidad de Málaga, Blvr. Louis Pasteur, 35, Malaga, 29010, Spain, 3Department of Electronic Engineering, Igor Sikorsky Kyiv Polytechnic Institute, Polytekhnichna Street, 16, Kyiv, 03056, Ukraine, 4Faculty of Applied Sciences, Ukrainian Catholic University, Kozelnytska Street, 2a, Lviv, 79026, Ukraine
Accurate 3D reconstruction of dental structures is crucial for orthodontic assessment and surgical planning. Traditional methods like SIFT and ORB struggle with complex dental textures. This paper proposes a pipeline using KeyNetAffNetHardNet for feature detection and matching, achieving higher robustness and 25% faster computation compared to state-of-the-art methods like LoFTR and DISK + LightGlue. To optimize 3D mesh reconstruction, Surface-Aligned Gaussian Splatting (SuGaR) enhances mesh accuracy and rendering quality, achieving SSIM up to 0.9538 and PSNR up to 28.98, improving SSIM by 10% and PSNR by 15% over conventional approaches. Experimental results demonstrate that integrating KeyNetAffNetHardNet and SuGaR delivers high-fidelity 3D dental models with improved efficiency and quality, advancing dental diagnostics and treatment planning.
3D Reconstruction, Keypoint Matching, Gaussian Splatting, Dental Imaging, Deep Learning, Computer Vision.
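A brief sketch of the feature detection and matching stage named above, using the KeyNetAffNetHardNet module available in the kornia library (one common implementation of this detector/descriptor combination); image sizes, feature counts, and the matching threshold are illustrative, and random tensors stand in for real dental frames.

```python
import torch
import kornia.feature as KF

device = torch.device("cpu")
# KeyNet detector + affine shape estimation + HardNet descriptor.
feature = KF.KeyNetAffNetHardNet(num_features=2000).to(device).eval()

def extract(img: torch.Tensor):
    """img: grayscale tensor of shape (1, 1, H, W), values in [0, 1]."""
    with torch.inference_mode():
        lafs, responses, descs = feature(img)
    return lafs, descs

# Placeholder intraoral frames; real input would come from photos or video.
img1 = torch.rand(1, 1, 480, 640, device=device)
img2 = torch.rand(1, 1, 480, 640, device=device)
lafs1, descs1 = extract(img1)
lafs2, descs2 = extract(img2)

# Symmetric mutual-nearest-neighbour matching with a ratio threshold.
dists, idxs = KF.match_smnn(descs1[0], descs2[0], 0.95)
print(f"{idxs.shape[0]} tentative matches")  # these feed SfM / Gaussian splatting
```

The tentative matches produced here are what a structure-from-motion stage would triangulate before the SuGaR mesh refinement described in the abstract.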
Copyright © ICAITA 2025