2026 IEEE Conference on Generative AI for Secure Systems (GAISS)

28-30 October 2026

The University of Texas at Austin, 110 Inner Campus Dr, Austin, TX 78705, United States

The main objective of GAISS'2026 is to present research on applications of AI and Cybersecurity across Business, Healthcare, Management, Environmental Science, Engineering, and Social Science. The conference provides a platform for researchers and scientists from around the world to exchange experiences and research results on all aspects of AI and Cybersecurity in these fields, and an opportunity to interact and establish professional relationships for future collaboration. It aims to promote the innovations and work of researchers, engineers, students, and scientists worldwide, guided by the question of what more can be achieved with existing technology and resources.

Important Dates


Paper Submission: 15 May 2026
Paper Acceptance Notification: after review by 2-3 reviewers
Regular Registration: 15 August 2026
Conference: 28-30 October 2026

Below are the topics for the conference. We welcome original contributions in the core and applied areas of AI and Cybersecurity:

Topics


  • Track 1 : Generative AI for Threat Intelligence & Adversary Simulation
  • Track 2 : Secure and Robust Generative Models in Adversarial Settings
  • Track 3 : Generative AI for Secure Software Development, DevSecOps & Code Generation
  • Track 4 : Generative AI in Critical Infrastructure, IoT & Cyber-Physical Secure Systems
  • Track 5 : Generative AI and Quantum Machine Learning for Secure Systems
  • Track 6 : Synthetic Data, Privacy-Preservation & Federated Generative Models
  • Track 7 : Generative AI for Secure Communications, Networking and Software-Defined Infrastructure
  • Track 8 : Human–AI Collaboration, Socio-Technical Impacts & Governance of Generative AI in Secure Systems
  • Track 9 : Generative AI for Red-Teaming, Automated Attack-Surface Generation & Blue-Team Automation
  • Track 10 : Emerging Foundations: Agentic & Autonomous Generative AI Systems in Secure Environments
  • Track 11 : Foundations and theory of generative AI for secure systems
  • Track 12 : Security, robustness, and safety of generative models
  • Track 13 : Privacy-preserving generative AI
  • Track 14 : Adversarial attacks and defenses involving generative systems
  • Track 15 : Secure architectures and deployment of foundation models
  • Track 16 : Agentic AI
  • Track 17 : Detection and mitigation of AI-enabled threats
  • Track 18 : Ethical, legal, and governance considerations for secure generative AI
  • Track 19 : Traditional AI, including Machine Learning, Deep Learning, Federated Learning, and related methods