
Projects
Operationalizing Ethical AI in Critical Environments
My work focuses on the design and implementation of Ethical and Trustworthy Artificial Intelligence (AI) across high-stakes domains such as Cybersecurity, Healthcare, and Finance.
Through a combined approach of academic research and strategic advisory services, I support the development of AI systems that meet the demands of both technical performance and ethical integrity.
​
I draw on established frameworks such as ART AI (Accountability, Responsibility, and Transparency) and apply them within context-specific methodologies tailored to complex, real-world environments.
​
​​​
These contributions span critical domains where AI decisions carry significant operational and societal impact:

- Development of bias-aware pipelines for AI in national security and cybersecurity applications
- Application of human-centric design principles in AI systems for healthcare and public sector use
- Advisory on transparent and auditable AI models in finance, aligning with evolving regulatory standards

These initiatives are informed by practical constraints (privacy, fairness, interpretability, and efficiency) to enable the responsible deployment of AI in sectors where trust and accountability are critical.

Through research-led consulting and domain-specific implementation, I support organizations in building AI systems that are ethical by design and effective in practice.
​​​​​​​​​​​​
Industry & Consulting Projects
Examples of Impact
1
Responsible AI for National Threat Intelligence Systems
Domain: Cybersecurity & Critical Infrastructure
Focus: Bias Mitigation, Explainability, and Ethical Deployment in Threat Detection

This project addressed the integration of machine learning into a national-level threat intelligence platform tasked with analyzing real-time data from multiple security agencies. Given the system's influence on incident prioritization and escalation, ethical oversight was essential. I provided strategic input on the ethical design and evaluation of the AI pipeline, ensuring compliance with the principles of Accountability, Responsibility, and Transparency (ART AI). The project involved:

- Bias detection and mitigation in classification models using adversarial data
- Development of model explainability layers to enable operational accountability
- Advising on ethical risk frameworks in coordination with public sector stakeholders
- Assessment of inter-agency data sharing protocols under privacy and fairness guidelines

The result was a redesigned AI workflow that improved both operational transparency and stakeholder confidence, forming the basis for governance recommendations for future national AI deployments.
View Related Skills: Cybersecurity · Artificial Intelligence (AI) · Threat Intelligence · Machine Learning · Bias Mitigation · Explainable AI (XAI) · Ethical AI · Adversarial Data Analysis · Algorithmic Fairness · AI Governance · Public Sector Consulting · Inter-Agency Data Sharing · Data Privacy · Accountability Frameworks · Responsible AI · Transparency in AI · Risk Assessment · Stakeholder Engagement
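The bias-detection step above can be sketched in miniature. This is an illustrative example, not the project's actual pipeline: a minimal check, in the spirit of demographic parity, comparing how often a classifier flags records from different contributing sources (the group names and flag values are hypothetical).

```python
def flag_rate(labels):
    """Fraction of records flagged positive (1) within a group."""
    return sum(labels) / len(labels) if labels else 0.0

def parity_gap(flags_by_group):
    """Largest difference in flag rates across groups.
    A large gap is one signal that the model treats sources unevenly."""
    rates = [flag_rate(v) for v in flags_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs grouped by contributing agency
flags = {
    "agency_a": [1, 0, 0, 1, 0],   # 2/5 flagged
    "agency_b": [1, 1, 1, 0, 1],   # 4/5 flagged
}
gap = parity_gap(flags)   # 0.8 - 0.4 = 0.4
```

A gap threshold would be set per context; in practice such a check is one input to a broader fairness review, not a pass/fail test on its own.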
2
Ethical Oversight for Autonomous Threat Response Systems
Domain: Cybersecurity & AI Governance
Focus: Real-Time Decision-Making, Human-in-the-Loop Design, and Accountability

This project focused on the ethical and operational design of an AI-driven cyber defense system capable of autonomously detecting, classifying, and initiating preliminary containment of threats within critical infrastructure networks. While technically sophisticated, the system raised urgent concerns around delegated decision-making, particularly regarding the proportionality of responses and the potential for false positives impacting public services. My role centered on embedding ART AI principles into the system architecture and operational policy, including:

- Designing a decision traceability framework to ensure post-incident accountability
- Establishing thresholds for human-in-the-loop intervention in real-time response logic
- Contributing to risk modeling protocols that balanced speed, impact, and ethical oversight
- Advising stakeholders on transparency reporting for audit and compliance readiness

The project served as a template for governance-by-design in autonomous security AI systems and informed institutional policy for future use of automated threat response.
View Related Skills: Cybersecurity · Autonomous Systems · AI Governance · Real-Time Decision Systems · Human-in-the-Loop Design · Ethical AI · Explainable AI (XAI) · Risk Modeling · Algorithmic Accountability · Threat Detection · Security Automation · Transparency Reporting · Compliance Readiness · Decision Traceability · Responsible AI · Operational Ethics · Critical Infrastructure Protection · Policy Development · Audit Strategy
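The human-in-the-loop thresholding idea can be illustrated with a small sketch. The function name and cutoffs below are assumptions for illustration, not the deployed system's values: low-confidence detections are only logged, an ambiguous middle band is escalated to a human analyst, and only high-confidence detections trigger automated containment.

```python
# Hypothetical confidence cutoffs; in practice these are tuned per
# asset criticality and reviewed as part of governance policy.
LOG_ONLY_BELOW = 0.5
AUTO_CONTAIN_ABOVE = 0.95

def route_decision(threat_score: float) -> str:
    """Map a model confidence score to an intervention tier."""
    if threat_score < LOG_ONLY_BELOW:
        return "log_only"          # record, no action
    if threat_score < AUTO_CONTAIN_ABOVE:
        return "human_review"      # human-in-the-loop band
    return "auto_contain"          # automated, but fully traceable
```

The key design choice is that the ambiguous band defaults to a human decision, so automation never acts alone exactly where proportionality is hardest to judge.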
3
Cross-Border Ethical AI for Intelligence Fusion
Domain: Cybersecurity, Intelligence & International Collaboration
Focus: Data Ethics, Transparency, Multi-Stakeholder Governance

This project supported the design of an AI-powered intelligence fusion platform aimed at aggregating cyber threat signals from multiple national and private-sector sources across borders. While the technical goal was to enhance early warning systems, the scale and sensitivity of the shared data introduced serious challenges around trust, transparency, and accountability. I was engaged to ensure that the platform's machine learning models and data governance strategies complied with ethical expectations across jurisdictional, legal, and institutional boundaries. Key contributions included:

- Defining transparent model logic and audit pathways for decision-making under uncertainty
- Developing fairness protocols across uneven data contributors (e.g. public vs. private input)
- Consulting on cross-border AI compliance with GDPR, defense export controls, and ethical AI charters
- Creating policy templates to support informed consent, data usage transparency, and public accountability

The project laid the foundation for a scalable governance model for ethical AI in transnational threat intelligence, now being considered in broader consortium-level collaborations.
View Related Skills: Cybersecurity · Intelligence Fusion · AI Governance · Ethical AI · Data Ethics · Machine Learning · Cross-Border Data Sharing · Transparency in AI · Explainable AI (XAI) · Multi-Stakeholder Governance · Compliance Strategy · GDPR · Export Control Compliance · Public Sector Consulting · Fairness in AI · Informed Consent Frameworks · Risk Communication · Algorithmic Accountability · Security Policy Development · International Collaboration
4
Transparent AI for Credit Risk Assessment
Domain: Financial Services & Regulatory Technology
Focus: Model Explainability, Fair Lending, Regulatory Readiness

This project addressed the deployment of machine learning models in credit scoring and loan risk assessment for a major financial institution. While the models demonstrated strong predictive performance, their black-box nature posed challenges for regulatory compliance, fairness, and public trust. I was brought in to advise on ethical and governance alignment under the ART AI framework, focusing on:

- Designing explainability layers (e.g. SHAP-based interpretability) to clarify risk factors in credit decisions
- Supporting bias detection protocols in historical lending data across demographic segments
- Aligning model logic with the emerging EU AI Act and Fair Lending guidelines
- Creating internal governance documentation to support regulator-facing audits and transparent model review

The result was a reengineered AI risk pipeline that enhanced both operational trust and legal defensibility, positioning the client as a forward-looking institution in ethical financial AI.
View Related Skills: Financial Services · Regulatory Technology (RegTech) · Credit Scoring · Loan Risk Assessment · Ethical AI · Explainable AI (XAI) · SHAP Interpretability · Bias Detection · Fair Lending Compliance · Machine Learning · AI Governance · EU AI Act · Regulatory Readiness · Algorithmic Fairness · Risk Modeling · Transparency in AI · Responsible AI · Internal Audit Support · Data Ethics · Compliance Strategy
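The intuition behind SHAP-based explanations can be shown without the `shap` library: for a purely linear scoring model, the exact SHAP value of feature i reduces to w_i * (x_i - E[x_i]). The toy below demonstrates that identity; feature names, weights, and applicant values are hypothetical, and real credit models would use the shap package against the production model.

```python
def linear_shap(weights, x, background_means):
    """Per-feature contributions to score(x) - score(background mean).
    Exact SHAP values for a linear model with independent features."""
    return {f: w * (x[f] - background_means[f]) for f, w in weights.items()}

def score(weights, v):
    return sum(w * v[f] for f, w in weights.items())

# Hypothetical risk-score weights and applicant features
weights = {"income": -0.002, "utilization": 1.5, "late_payments": 0.8}
means = {"income": 50_000, "utilization": 0.3, "late_payments": 0.5}
applicant = {"income": 42_000, "utilization": 0.9, "late_payments": 2}

contrib = linear_shap(weights, applicant, means)
# SHAP's additivity property: contributions sum to the score difference
# between this applicant and the average applicant.
total = score(weights, applicant) - score(weights, means)
assert abs(sum(contrib.values()) - total) < 1e-9
```

That additivity property is what makes SHAP attractive for regulator-facing review: every point of a credit decision can be allocated to a named risk factor.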
5
Ethical AI in Real-Time Fraud Detection
Domain: Financial Crime & Transaction Monitoring
Focus: Human-in-the-Loop Design, False Positive Reduction, Model Accountability

In this project, I collaborated with a digital payments provider to improve the accuracy and fairness of real-time fraud detection algorithms. With high transaction volumes and tight latency requirements, the risk of false positives leading to account lockouts or financial harm was significant. My role focused on embedding ART AI principles to ensure responsible automated action. This included:

- Recommending a tiered intervention model combining automation with human oversight in ambiguous cases
- Proposing fairness metrics for fraud flagging across transaction types and user groups
- Supporting the creation of a decision traceability log for internal ethics reviews and audits
- Advising on consumer transparency tools, allowing users to appeal or understand flagged activity

The outcome was a more equitable and explainable fraud detection framework, increasing both performance and user trust.
View Related Skills: Financial Technology (FinTech) · Fraud Detection · Transaction Monitoring · Human-in-the-Loop Design · Ethical AI · Explainable AI (XAI) · Machine Learning · Algorithmic Fairness · Decision Traceability · Model Accountability · False Positive Reduction · Consumer Transparency · Responsible AI · Risk Management · AI Governance · Internal Audit Support · User Trust · Compliance Strategy · Real-Time Decision Systems
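A decision traceability log of the kind described can be sketched as an append-only record. The field names and schema below are illustrative, not the client's actual design: each automated fraud action is stored with its inputs, score, threshold, and route taken, plus a content hash so auditors can detect after-the-fact tampering.

```python
import hashlib
import json
import time

def trace_record(txn_id, score, threshold, action, model_version):
    """Build one audit record for an automated fraud decision."""
    rec = {
        "txn_id": txn_id,
        "score": round(score, 4),
        "threshold": threshold,
        "action": action,              # e.g. "hold_for_review"
        "model_version": model_version,
        "ts": time.time(),
    }
    # Digest over the decision content (timestamp excluded), so identical
    # decisions hash identically and stored records are tamper-evident.
    payload = json.dumps({k: rec[k] for k in sorted(rec) if k != "ts"})
    rec["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    return rec
```

In a review, the log lets an ethics panel reconstruct exactly why a given transaction was held, which is the precondition for a meaningful user appeal process.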
6
AI Governance in High-Velocity Payment Platforms
Domain: Payments & Financial Compliance
Focus: Algorithmic Transparency, Regulatory Alignment, Risk Controls

This project involved a global payment provider implementing machine learning models to automate decision-making across its transaction lifecycle, from real-time risk scoring to merchant onboarding and transaction blocking. Given the scale and speed of these systems, concerns emerged around algorithmic opacity, cross-jurisdictional compliance, and the need for operational accountability in decision logic. I was engaged to provide strategic guidance on aligning AI systems with ethical and regulatory standards, including:

- Designing a governance framework for AI decision-making spanning fraud, onboarding, and compliance
- Supporting the development of explainability tools for both internal stakeholders and external regulators
- Advising on cross-border data governance protocols, ensuring alignment with GDPR and regional financial authorities
- Recommending ethical escalation policies for cases of model failure or overreach, including user redress channels

The outcome was a structured approach to ethical AI lifecycle management in high-speed financial environments, enhancing regulatory trust and positioning the platform for long-term compliance scalability.
View Related Skills: Payments Technology · Financial Compliance · AI Governance · Algorithmic Transparency · Machine Learning · Ethical AI · Explainable AI (XAI) · Fraud Risk Scoring · Merchant Onboarding · Cross-Border Data Governance · GDPR Compliance · Model Accountability · Regulatory Alignment · Responsible AI · Risk Management · Escalation Policy Design · Compliance Strategy · Lifecycle Governance · Real-Time Decision Systems · Public Trust in AI
7
Governance Framework for Clinical AI Decision Systems
Domain: Healthcare Technology & Regulation
Focus: Ethical Oversight, Explainability, Clinical Risk Management

This project focused on the design and evaluation of AI systems used in diagnostic decision support for complex, multi-morbidity patient cases. These systems operate in environments of clinical uncertainty, with decisions that influence not only care plans but also liability and resource allocation. I contributed to the development of a governance model for AI adoption within hospital systems, ensuring:

- Traceability of AI-driven recommendations for clinical accountability
- Implementation of human-in-the-loop safeguards in high-risk diagnostic workflows
- Alignment with emerging AI-in-healthcare policy, including transparency and audit requirements
- Design of cross-functional evaluation protocols involving clinicians, data scientists, and ethicists

The outcome was a replicable governance strategy for deploying ethical, defensible AI within clinical environments, bridging innovation with institutional trust.
View Related Skills: Healthcare AI · Clinical Decision Support Systems · Ethical AI · Explainable AI (XAI) · AI Governance · Clinical Risk Management · Human-in-the-Loop Design · Algorithmic Transparency · Healthcare Policy Alignment · Multi-Stakeholder Evaluation · Audit-Ready AI Systems · Institutional Trust · Data Ethics · Traceability in AI · Interdisciplinary Collaboration · Responsible AI · Clinical Accountability · Regulatory Compliance · Health Informatics
8
Ethical AI for Real-Time Health Risk Prediction
Domain: Predictive Analytics in Healthcare
Focus: Fairness, Risk Stratification, Operational Transparency

In collaboration with a public health agency, this project involved refining a real-time risk prediction model for emergency department triage and escalation. While the system improved early identification of high-risk patients, it raised ethical questions about bias, access, and transparency. My role was to embed ART AI principles into the model development and deployment strategy:

- Introduced fairness auditing across demographic and socioeconomic variables
- Developed policy pathways for explainability at the point of care
- Structured an internal review process for AI risk stratification decisions
- Provided advisory on data governance, patient communication, and public trust

This work strengthened the platform's position as a trustworthy decision aid, compliant with both ethical frameworks and clinical standards.
View Related Skills: Healthcare AI · Predictive Analytics · Risk Stratification · Ethical AI · Fairness Auditing · Explainable AI (XAI) · AI Governance · Real-Time Decision Systems · Emergency Care Technology · Clinical Transparency · Human-Centered Design · Patient Communication Strategy · Socioeconomic Bias Detection · Public Health Technology · Responsible AI · Data Governance · Institutional Trust · Clinical Risk Oversight · ART AI Principles
9
Ethical AI for Cancer Treatment Decision Support
Domain: Oncology Informatics & Clinical AI Governance
Focus: Explainability, Shared Decision-Making, Algorithmic Accountability

This project focused on the development of an AI-based decision support system for oncology treatment planning, designed to assist multidisciplinary teams in recommending personalized care pathways based on genomic, clinical, and historical treatment data. Given the life-altering consequences of these decisions and the increasing complexity of available data, the system required more than strong performance: it needed clinical transparency, ethical oversight, and patient trust. I was engaged to ensure the system's design adhered to ethical AI principles, with responsibilities including:

- Structuring explainability protocols for oncologists, enabling informed interpretation of AI-generated recommendations
- Establishing safeguards for human-in-the-loop decision-making, supporting accountability in treatment discussions
- Advising on risk mitigation strategies for clinical bias and over-reliance on algorithmic outputs
- Contributing to a governance framework to support safe deployment across cancer networks and institutional review bodies

This project demonstrated how responsible AI can support, rather than replace, clinical judgment, and how rigorous governance can enable safe and trusted adoption in one of medicine's most complex domains.
View Related Skills: Oncology Informatics · Clinical Decision Support · Ethical AI · Explainable AI (XAI) · Human-in-the-Loop Design · Algorithmic Accountability · AI Governance · Shared Decision-Making · Personalized Treatment Planning · Clinical Bias Mitigation · Genomic Data Integration · Clinical Transparency · Responsible AI · Interdisciplinary Collaboration · Risk Mitigation in AI · Healthcare Regulation · Institutional Review Protocols · Trustworthy AI Systems · Cancer Care Technology
Research Contributions
10
Explainable and Robust AI for Intrusion Detection Management (IDM)
Associated with Netrity Ltd | Funded by Innovate UK
Domain: Cybersecurity and Critical Infrastructure Protection

Advancing secure, explainable AI solutions for critical intrusion detection and prevention in high-risk environments.

This project addresses the critical need for transparent, reliable, and ethically governed AI in intrusion detection and prevention systems, particularly for organizations handling sensitive infrastructure and GDPR-regulated data. Current AI-driven IDS/IPS solutions often suffer from limited explainability, poor robustness under attack, and poor generalization beyond known datasets, creating major barriers to trust and adoption in high-risk environments.

Key Contributions:
- Developed Explainable AI (XAI) methods for network intrusion detection, enabling transparent, accountable decision-making
- Enhanced robustness evaluation through orchestrated multi-source attack simulations
- Aligned AI model development with Ethical AI frameworks, GDPR, and the EU AI Act
- Designed interpretable, resilient systems to reduce operational complexity for cybersecurity teams

Impact: This project reinforces my commitment to building AI systems that are secure, ethically defensible, and operationally effective within critical cybersecurity domains.
View Related Skills: Cybersecurity · Intrusion Detection Systems (IDS/IPS) · Ethical AI · Explainable AI (XAI) · Machine Learning · Adversarial Robustness · Algorithmic Transparency · GDPR Compliance · EU AI Act Alignment · AI Governance · Network Security · Responsible AI · Critical Infrastructure Protection · Multi-Source Attack Simulation · Model Interpretability · Operational Risk Reduction · Resilient AI Systems · Security Policy Design · Trustworthy AI · Data Privacy & Security
11
CYRENE – Enhancing Security and Accountability in ICT Supply Chains
Associated with Netrity Ltd | Funded by Horizon 2020 Programme (Grant Agreement No. 952690)
Domain: Cybersecurity and Critical Infrastructure Protection

Developing AI-driven cybersecurity solutions to strengthen resilience and regulatory accountability across critical ICT supply chains.

Project Description: The CYRENE project, funded under the Horizon 2020 Work Programme, focuses on developing innovative cybersecurity and accountability solutions for ICT systems, components, and services across complex supply chains. I contributed to the Risk and Conformity Assessment (RCA) Methodology, integrating AI-driven analytics into cybersecurity services and resilience evaluation tools to support the development of ethically governed, trustworthy supply chain infrastructures.

Key Contributions:
- Supported the development of tools assessing security vulnerabilities and operational resilience across interconnected supply chain infrastructures
- Integrated AI-driven analytics for enhanced risk detection and conformity assessment
- Participated in the evaluation and deployment of CYRENE solutions across multiple industrial sectors
- Promoted alignment with EU cybersecurity regulations and ethical standards for ICT risk management

Impact: This project exemplifies my commitment to building trustworthy, resilient AI-enabled systems for securing critical infrastructures and global supply chains.
View Related Skills: Cybersecurity · Supply Chain Risk Management · AI Governance · Ethical AI · Risk and Conformity Assessment (RCA) · Resilience Engineering · Explainable AI (XAI) · AI-Driven Threat Detection · ICT Infrastructure Security · Conformity Assessment · EU Cybersecurity Regulations · Horizon 2020 Research · Compliance Strategy · Operational Risk Assessment · Trustworthy AI · Cross-Sector Deployment · Vulnerability Analysis · Responsible AI · Industrial Cybersecurity · Data Security Standards
12
My Cancer mAI Care – AI-Enhanced Digital Health Support
Associated with Abertay University | Funded by Macmillan Cancer Support and DHI Scotland
Domain: Healthcare AI and Digital Health Innovation

Applying AI and predictive modeling to personalize cancer care pathways, improve service planning, and enhance patient-centered digital health systems.

Project Description: Commissioned by Macmillan Cancer Support in collaboration with the Digital Health & Care Innovation Centre (DHI) and Abertay University, this project aimed to transform cancer care planning and resource management through the application of Artificial Intelligence and game theory principles. The initiative focused on developing a dual-interface digital platform, supporting both patients and healthcare professionals, to better understand and predict individualized service needs for cancer care.

Key Contributions:
- Applied AI-driven modeling and predictive analytics to personalize referral pathways and optimize healthcare resource allocation
- Integrated patient-centered design principles to ensure usability, engagement, and individualization of services
- Developed secure, cloud-based platforms aligned with GDPR compliance, data privacy, and Ethical AI frameworks
- Supported broader public health goals by promoting transparency, trust, and explainability in digital cancer care pathways

Impact: This project reinforces my commitment to responsible, patient-centered AI innovation, ensuring that digital health technologies remain ethical, interpretable, and truly beneficial to real-world patient outcomes.
View Related Skills: Healthcare AI · Digital Health Innovation · Predictive Analytics · Cancer Care Pathways · Patient-Centered Design · Ethical AI · Explainable AI (XAI) · AI-Driven Resource Planning · Referral Optimization · Public Health Technology · User Experience (UX) in Health · Responsible AI · GDPR Compliance · Cloud-Based Health Platforms · Health Informatics · Data Privacy · Interdisciplinary Collaboration · Trustworthy AI Systems · Transparency in AI · Digital Health Governance
13
AI-Driven Ambient Intelligence System for Stroke Rehabilitation
Associated with University of Strathclyde | Funded by Capita through the Data-Enabled Rehabilitation Project
Domain: Healthcare AI and Digital Rehabilitation Innovation

Developing an AI-powered Ambient Intelligence system to deliver personalized, ethical, and secure stroke rehabilitation in home-based settings.

Project Description: Stroke remains one of the leading causes of death and long-term disability globally, with over 6.7 million fatalities annually. In the UK alone, approximately 100,000 stroke cases are reported each year, highlighting the urgent need for effective rehabilitation strategies that extend beyond clinical environments. My PhD research focused on developing an AI-powered Ambient Intelligence (AmI) system designed to address the complex, underserved needs of home-based stroke rehabilitation, under the Data-Enabled Rehabilitation Project funded by Capita.

Key Challenges Addressed:
- The lack of continuous rehabilitation support due to socioeconomic and systemic constraints
- Limited personalization and human-centricity in existing rehabilitation technologies
- Privacy, engagement, motivation, psychological, cultural, and security concerns of stroke patients

Research Contributions:
- Developed a multi-disciplinary Ambient Intelligence platform integrating AI, sensor technologies, and human-centered design principles
- Embedded ethical, transparent AI frameworks to enhance patient trust, privacy, and adaptability
- Pioneered a novel individualized rehabilitation model tailored to psychological, motivational, and cultural contexts
- Achieved academic validation through peer-reviewed publications and laid the groundwork for future patient-centered digital health systems

Impact: This project exemplifies my commitment to responsible AI innovation that is both technologically advanced and ethically aligned with patient needs, privacy standards, and societal values.
View Related Skills: Healthcare AI · Ambient Intelligence (AmI) · Digital Rehabilitation · Ethical AI · Explainable AI (XAI) · Human-Centered Design · Personalized Health Technology · Sensor Integration · AI-Driven Health Monitoring · Data-Enabled Rehabilitation · Responsible AI · Patient Privacy · Adaptive Rehabilitation Systems · Stroke Recovery Technology · Interdisciplinary Research · Clinical AI Ethics · Motivation & Engagement Modeling · Algorithmic Transparency · AI in Home Healthcare · Trustworthy AI Systems
14
Resilient and Ethical AI for Ambient Intelligence and Sustainable Digital Infrastructure
Associated with University of Strathclyde and Abertay University | Funded by the Saltire Emerging Researcher Scheme (Scottish Funding Council)
Domain: Ethical AI, Ambient Intelligence, and Sustainable Digital Infrastructure

Exploring resilient, accountable AI applications for Ambient Intelligence systems and sustainable digital infrastructures through international collaboration and innovation.

Project Description: Through a research placement funded by the Saltire Emerging Researcher Scheme, I collaborated with the Foundation for Research and Technology – Hellas (FORTH) in Greece, focusing on the intersection of Ambient Intelligence, the Internet of Medical Things (IoMT), and sustainable AI for critical infrastructures. The visit supported early-stage proof-of-concept research on embedding ethical AI frameworks into smart environments, including intelligent living spaces and sustainable agricultural systems, aligning with Horizon Europe goals around soil health, food sustainability, and responsible AI deployment.

Key Contributions:
- Investigated applications of Ambient Intelligence and AI in healthcare, elder care, and agricultural sustainability contexts
- Developed early-stage proposals for applying ART AI (Accountability, Responsibility, Transparency) principles to intelligent environments
- Supported the re-development of a Horizon Europe collaborative research proposal addressing resilience, ethics, and AI sustainability
- Strengthened international networks for future interdisciplinary research across AI ethics, IoT, and Human-Computer Interaction (HCI)

Impact: This project reinforced my research agenda on Ethical and Trustworthy AI for complex, critical systems, expanding applications into emerging domains such as intelligent environments, sustainable agriculture, and ambient healthcare support.
View Related Skills: Ethical AI · Ambient Intelligence · Internet of Medical Things (IoMT) · Sustainable AI · Explainable AI (XAI) · AI for Agriculture · Responsible AI · Human-Computer Interaction (HCI) · AI Governance · Trustworthy AI · ART AI Framework · Intelligent Environments · Healthcare Technology · Smart Living Spaces · Cross-Disciplinary Research · International Collaboration · Horizon Europe Proposal Development · Resilient Systems · Digital Infrastructure Innovation · Societal Impact of AI
15
Prediction of Wear Rates of UHMWPE Bearings in Hip Joint Prostheses Using Support Vector Models and Grey Wolf Optimization
Domain: Biomedical AI and Materials Science Innovation

Applying machine learning and explainable AI methods to predict material wear behavior in biomedical implants, supporting safer and longer-lasting prosthetic designs.

Project Description: One of the critical challenges in joint arthroplasty is enhancing the wear resistance of ultrahigh molecular weight polyethylene (UHMWPE), a widely used material for acetabular bearings in total hip joint prostheses. This study developed a hybrid machine learning model using a Support Vector Machine (SVM) combined with Grey Wolf Optimization (GWO) to predict UHMWPE wear rates based on a comprehensive dataset aggregated from 29 different pin-on-disc wear experiments. In total, 129 data points were analyzed to train and validate the predictive model.

Key Contributions:
- Built a Support Vector Machine-Grey Wolf Optimizer (SVM-GWO) hybrid model to predict UHMWPE wear rates across varying experimental conditions
- Applied Shapley Additive Explanations (SHAP) to interpret model predictions and identify critical influencing parameters such as radiation dose and surface roughness
- Demonstrated that high radiation doses (>95 kGy) and very low surface roughness (Ra
View Related Skills: Biomedical AI · Materials Science · Machine Learning · Support Vector Machines (SVM) · Grey Wolf Optimization (GWO) · Predictive Modeling · Explainable AI (XAI) · SHAP Analysis · Biomechanical Engineering · Prosthetic Design Optimization · Wear Prediction · Experimental Data Aggregation · Algorithmic Transparency · Medical Device Innovation · Computational Modeling · Responsible AI · Implant Safety · AI in Healthcare Technology · Data-Driven Materials Research
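The GWO half of the SVM-GWO hybrid can be sketched compactly. This is a minimal pure-Python Grey Wolf Optimizer, not the study's implementation: in the study the objective was the SVM's cross-validated prediction error over hyperparameters such as (C, gamma), while here a simple quadratic bowl stands in so the example stays self-contained.

```python
import random

def gwo(objective, dim=2, wolves=12, iters=60, lo=-5.0, hi=5.0, seed=0):
    """Grey Wolf Optimization: the pack is pulled toward its three best
    members (alpha, beta, delta), with shrinking step size over time."""
    rng = random.Random(seed)
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=objective)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2.0 * (1 - t / iters)   # exploration -> exploitation schedule
        for i in range(wolves):
            new = []
            for d in range(dim):
                estimates = []
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - pack[i][d])
                    estimates.append(leader[d] - A * D)
                # average the three leader-guided moves, clamped to bounds
                new.append(min(hi, max(lo, sum(estimates) / 3.0)))
            pack[i] = new
    return min(pack, key=objective)

# Stand-in objective: imagine p = (log C, log gamma) and this is CV error.
def bowl(p):
    return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

best = gwo(bowl)
```

Swapping `bowl` for a function that trains an SVM at the candidate hyperparameters and returns validation error recovers the hybrid scheme the study describes.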