Archives

  • Explainable AI in Action: Advancing Applied Intelligence with Interpretable Insights
    Vol. 1 No. 2 – Special Issue on Explainable AI (2025)

    Guest Editor: A/Prof. Xinyu Cai

    Associate Professor, Jiaxing University

    A/Prof. Xinyu Cai, Associate Professor at Jiaxing University, holds a Ph.D. in Economics and serves as a master's supervisor. He is a Certified Information Systems Auditor (CISA) and an expert with the Ministry of Education’s Graduate Evaluation Program. His research focuses on human capital, employment and wage systems, and large-scale AI applications in sustainable development. A/Prof. Cai has led major national and provincial research projects and has published over 20 academic papers, including papers in Nature sub-journals and leading Chinese core journals. He has received multiple research awards and serves on national academic committees related to AI and human resources.


    Special Issue Overview:

    With the rapid proliferation of Artificial Intelligence across myriad sectors, and particularly since mainstream discourse acknowledged its transformative potential, understanding AI's decision-making processes has become a critical factor in its responsible adoption and deployment. More recent advances, such as the remarkable capabilities of deep learning and large-scale models, have further intensified the need for transparency. Consequently, stakeholders across industries, from healthcare diagnostics to financial forecasting, now seek not just "intelligent" solutions but "intelligible" ones as part of their operational strategy. The domain of applied AI is no exception: "black box" models are no longer universally accepted, as virtually every critical application now calls for explainable practices. At the same time, the drive for innovation and competitive advantage pushes toward increasingly complex AI. Reconciling these imperatives, meeting the demand for transparent and trustworthy AI while pushing the boundaries of predictive power, is often a challenge for AI developers, and the complexity inherent in advanced AI can further hinder accountability and ethical oversight. Can the field of applied AI, however, also be part of the solution and empower users and developers to make informed decisions and deploy systems that are genuinely robust, fair, and beneficial?
    The aim of this Special Issue is to explore how applied AI research integrates explainability and interpretability to achieve these goals. This includes the development and application of novel techniques for making AI systems more transparent, as well as studies of the psychological and practical antecedents of trust in, and adoption of, explainable AI. Contributions may also examine how researchers and practitioners address the apparent trade-off between model complexity and interpretability, and the role of human-AI interaction in this context. By expanding knowledge and theory, this Special Issue aims to help AI stakeholders address the challenges of intelligibility more effectively and transparently. Topics may include the following:
    • Developing and applying interpretable deep learning frameworks (e.g., CNNs, RNNs, Transformers, Attention Mechanisms) for complex data analysis.
    • Innovations in explainable AI for feature engineering, dynamic feature weighting, selection, and validation.
    • The integration of causal inference and discovery (e.g., structure-learning methods such as NOTEARS) and attribution techniques (e.g., SHAP values) with machine learning for robust and transparent predictions; a brief illustrative sketch follows this list.
    • Novel applications of XAI in diverse domains such as employment market analysis, marketing optimization, financial services, healthcare, and public policy.
    • Techniques for effectively visualizing and communicating AI explanations to domain experts, policymakers, and end-users.
    • Methodologies for evaluating the effectiveness, fidelity, and real-world impact of XAI methods.
    • Strategies for bridging model-agnostic interpretability techniques with domain-specific knowledge and constraints.
    • The ethical implications, challenges of bias, and responsible deployment strategies for explainable AI systems.
    • Human-computer interaction aspects of XAI, including user trust and reliance on explainable systems.
    • Theoretical advancements in understanding the foundations of interpretability in machine learning.
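    To make the attribution techniques named above concrete, the following is a minimal, illustrative sketch of a SHAP-based feature attribution for a generic tabular classifier. It assumes the open-source shap and scikit-learn packages and a stock dataset; the model and data choices are hypothetical examples, not drawn from any contribution to this issue.

```python
# Minimal sketch: SHAP feature attributions for a tree-ensemble classifier.
# Assumes the open-source `shap` and `scikit-learn` packages; the dataset
# and model choices are illustrative, not from this Special Issue.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles; each value
# is one feature's additive contribution to a single prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: rank features by mean absolute SHAP value.
shap.summary_plot(shap_values, X, plot_type="bar")
```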
    Keywords
    Explainable AI (XAI), Interpretable Machine Learning, Applied Artificial Intelligence, Feature Engineering, Causal Inference, Attention Mechanisms, Deep Learning, CNN, Transformers, SHAP values, Spatiotemporal Analysis, Employment Market Analysis, Marketing Analytics, Big Data Analytics, Decision Support Systems, Algorithmic Transparency, AI Ethics.


  • INNO-PRESS: Journal of Emerging Applied AI
    Vol. 1 No. 1 (2025)

    Issue 1 – Foundations of Emerging Applied Artificial Intelligence

    The Journal of Emerging Applied AI (JEAAI) is pleased to present its inaugural issue, establishing a dedicated forum for high-quality, peer-reviewed scholarship at the intersection of artificial intelligence theory and real-world application. This first issue reflects the journal’s foundational mission: to advance and disseminate research that demonstrates the transformative potential of AI technologies across sectors and disciplines.

    This opening volume features contributions that exemplify the journal’s emphasis on rigorously developed, practically deployed AI systems. The selected articles cover a spectrum of domains—including healthcare, robotics, transportation, education, and sustainability—demonstrating the breadth of AI’s impact when translated from conceptual innovation to applied implementation.

    With a commitment to methodological soundness, interdisciplinary relevance, and societal benefit, JEAAI aims to become a leading platform for scholars, practitioners, and innovators who are engaged in solving real-world problems through intelligent systems. The journal’s scope encompasses original research, technical reports, case studies, and critical perspectives, all grounded in applicability and reproducibility.

    We invite the academic and professional community to engage with JEAAI as contributors, reviewers, and readers, and to join us in shaping a future where applied artificial intelligence drives meaningful and responsible progress.