About the Journal

The Journal of Emerging Applied AI (JEAAI) is an international, open-access, peer-reviewed academic journal dedicated to showcasing the transformative impact of artificial intelligence (AI) in real-world applications. JEAAI provides a vibrant platform for researchers, practitioners, engineers, and innovators to disseminate AI-based solutions that address contemporary challenges across industries and domains.

JEAAI emphasizes the practical implementation of AI technologies rather than purely theoretical advances. The journal is committed to publishing high-quality research that bridges the gap between AI theory and deployment, driving innovation in sectors such as healthcare, education, robotics, transportation, finance, agriculture, energy, and beyond.

 

Aims and Scope

JEAAI focuses on applied artificial intelligence research and development that leads to tangible societal and industrial impact. The journal welcomes novel methodologies, robust case studies, system prototypes, data-driven insights, and interdisciplinary approaches. Our core areas of interest include (but are not limited to):

  • Healthcare and Medicine: AI for diagnosis, treatment planning, medical imaging, predictive healthcare, and health data analysis.

  • Smart Transportation and Mobility: Autonomous systems, traffic optimization, intelligent infrastructure, logistics, and smart urban mobility.

  • Robotics and Automation: AI-integrated robotics, collaborative robots (cobots), robotic perception, adaptive control, and industrial automation.

  • Education and Learning Technologies: Personalized learning systems, intelligent tutoring, automated grading, learner analytics, and educational chatbots.

  • Financial Technology (FinTech): Credit scoring, fraud detection, algorithmic trading, portfolio management, and customer behavior modeling.

  • Emerging Interdisciplinary Fields: AI for sustainability, agriculture, climate modeling, IoT (AIoT), smart manufacturing, and social computing.

JEAAI also encourages research that explores the ethical, legal, and social implications of AI adoption, including transparency, accountability, and human-AI interaction.

 

Why Publish with JEAAI?

  • Application-Focused: Emphasizes deployable AI solutions that make a measurable impact in real-life environments.

  • Rapid Editorial Process:

    • Initial Editorial Screening: 5 days

    • First Peer Review Decision: 6 weeks

    • Full Peer Review and Acceptance: 14 weeks on average

    • Online Publication Post-Acceptance: 5 days

    • Final Issue Publication: within 4 weeks

  • Global Reach and Open Access: All published articles are freely available to read and download, ensuring maximum visibility and citation potential.

  • Support for Reproducibility: Encourages authors to share datasets, code repositories, and supplementary materials to foster transparency and reproducible science.

  • Multidisciplinary Collaboration: Serves as a hub for collaboration between academia, industry, and public institutions.

  • Inclusive Publication Philosophy: Welcomes contributions from emerging scholars, industry professionals, and underrepresented regions.

Publication and Indexing Status

The Journal of Emerging Applied AI (JEAAI) is published by INNO-PRESS LIMITED, an independent academic publisher based in Auckland, New Zealand. As a newly launched journal, JEAAI is currently in the process of obtaining its International Standard Serial Number (ISSN). All published articles are nonetheless openly accessible online and adhere to rigorous editorial and peer-review standards, ensuring the journal's credibility and scholarly value from inception.

Target Audience

JEAAI is intended for a broad spectrum of professionals and academics engaged in AI and its applications, including:

  • University researchers and graduate students in AI, computer science, data science, and engineering

  • Industry engineers, data scientists, and AI practitioners

  • Policymakers and government agencies interested in technology-driven solutions

  • Technology companies and R&D labs

  • Non-profits and NGOs applying AI in social and environmental domains

 

Types of Submissions Accepted

JEAAI publishes a wide range of contribution types to accommodate diverse research outputs:

  • Original Research Articles – Full-length papers presenting original, novel, and well-validated applied research.

  • Technical Reports – Descriptions of new systems, tools, or technologies with practical implications.

  • Case Studies – In-depth analyses of real-world AI deployments, including success stories and lessons learned.

  • Short Communications – Brief but impactful findings, prototypes, or early-stage results.

  • Review Articles – Comprehensive surveys of developments in a particular applied AI domain.

  • Perspectives & Commentaries – Expert opinions, critical reflections, and viewpoints on trends, challenges, and future directions in applied AI.

 

Our Vision

JEAAI strives to become a leading voice in applied AI research by encouraging rigorous, responsible, and impactful innovation. Our long-term vision is to foster a global community where research drives real change—where algorithms meet action, and intelligent systems enhance human well-being, organizational efficiency, and societal advancement.

We invite you to join our mission—whether as an author, reviewer, or reader—and contribute to shaping the future of applied artificial intelligence.

 

 

Current Issue

Vol. 1 No. 2 – Special Issue on Explainable AI (2025): Explainable AI in Action: Advancing Applied Intelligence with Interpretable Insights

Guest Editor: A/Prof. Xinyu Cai

Associate Professor, Jiaxing University

A/Prof. Xinyu Cai, Associate Professor at Jiaxing University, holds a Ph.D. in Economics and serves as a master's supervisor. He is a Certified Information Systems Auditor (CISA) and an expert with the Ministry of Education’s Graduate Evaluation Program. His research focuses on human capital, employment and wage systems, and large-scale AI applications in sustainable development. A/Prof. Cai has led major national and provincial research projects and published over 20 academic papers, including in Nature Portfolio journals and leading Chinese core journals. He has received multiple research awards and serves on national academic committees related to AI and human resources.

 

Special Issue Overview:

With the rapid proliferation of artificial intelligence across sectors, and with its transformative potential now widely acknowledged, understanding how AI systems reach their decisions has become a critical factor in responsible adoption and deployment. Recent advances, notably the remarkable capabilities of deep learning and large-scale models, have only intensified the need for transparency. Stakeholders across industries, from healthcare diagnostics to financial forecasting, now seek not just "intelligent" solutions but "intelligible" ones. Applied AI is no exception: "black box" models are no longer universally accepted, and virtually every critical application now calls for explainable practices. At the same time, the drive for innovation and competitive advantage pushes toward increasingly complex AI. Reconciling these imperatives, meeting the demand for transparent and trustworthy AI while continuing to push the boundaries of predictive power, is a persistent challenge for AI developers, and the complexity inherent in advanced AI can further hinder accountability and ethical oversight. Can the field of applied AI be part of the solution, empowering users and developers to make informed decisions and to deploy systems that are genuinely robust, fair, and beneficial?
The aim of this Special Issue is to explore how applied AI research is integrating explainability and interpretability to achieve these goals. This includes the development and application of novel techniques for making AI systems more transparent, as well as studies of the psychological and practical antecedents of trust in, and adoption of, explainable AI. Contributions may also examine how researchers and practitioners address the apparent trade-off between model complexity and interpretability, and the role of human-AI interaction in this context. By expanding knowledge and theory, this Special Issue aims to help AI stakeholders address the challenges of intelligibility more effectively and transparently. Topics may include the following:
• Developing and applying interpretable deep learning frameworks (e.g., CNNs, RNNs, Transformers, Attention Mechanisms) for complex data analysis.
• Innovations in explainable AI for feature engineering, dynamic feature weighting, selection, and validation.
• The integration of causal inference and discovery (e.g., structure-learning methods such as NOTEARS), together with attribution techniques such as SHAP values, into machine learning for robust and transparent predictions.
• Novel applications of XAI in diverse domains such as employment market analysis, marketing optimization, financial services, healthcare, and public policy.
• Techniques for effectively visualizing and communicating AI explanations to domain experts, policymakers, and end-users.
• Methodologies for evaluating the effectiveness, fidelity, and real-world impact of XAI methods.
• Strategies for bridging model-agnostic interpretability techniques with domain-specific knowledge and constraints.
• The ethical implications, challenges of bias, and responsible deployment strategies for explainable AI systems.
• Human-computer interaction aspects of XAI, including user trust and reliance on explainable systems.
• Theoretical advancements in understanding the foundations of interpretability in machine learning.
Keywords
Explainable AI (XAI), Interpretable Machine Learning, Applied Artificial Intelligence, Feature Engineering, Causal Inference, Attention Mechanisms, Deep Learning, CNN, Transformers, SHAP values, Spatiotemporal Analysis, Employment Market Analysis, Marketing Analytics, Big Data Analytics, Decision Support Systems, Algorithmic Transparency, AI Ethics.

 

Published: 2025-05-31
