Non-Human Identity
As organizations digitize and scale, the landscape of identity security has grown more complex. Non-Human Identities (NHIs) — such as service accounts, machine-to-machine interactions, IoT devices, and bots — now outnumber human users in many environments. These identities, which operate at high scale and speed, are critical for automation, efficiency, and innovation but are also vulnerable to exploitation if not managed securely.
This theme challenges participants to develop holistic solutions that address the following dimensions:
1. Unified Identity and Access Management for NHI:
Build frameworks that manage identities across humans and NHIs with equal rigor, ensuring consistent security policies across both. Develop tools that dynamically assign, monitor, and revoke permissions for NHIs based on context, reducing the risk of privilege misuse. Enable the discovery and cataloging of all NHIs in an organization, ensuring comprehensive visibility.
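To make the discovery-and-revocation idea concrete, here is a minimal sketch in Python, assuming a hypothetical in-memory catalog of NHIs with per-scope last-used timestamps; a real deployment would populate this from cloud IAM and secrets-manager APIs.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical catalog entry for a non-human identity (NHI).
@dataclass
class NHIRecord:
    name: str
    kind: str                      # "service_account", "bot", "iot_device", ...
    scopes: set[str] = field(default_factory=set)
    last_used: dict[str, datetime] = field(default_factory=dict)  # scope -> last use

def revoke_stale_scopes(nhi: NHIRecord, max_idle: timedelta) -> set[str]:
    """Revoke any scope the identity has not exercised within max_idle.

    This enforces least privilege dynamically: permissions that go unused
    are treated as unneeded and removed until explicitly re-granted.
    """
    now = datetime.now(timezone.utc)
    never = datetime.min.replace(tzinfo=timezone.utc)
    stale = {s for s in nhi.scopes if now - nhi.last_used.get(s, never) > max_idle}
    nhi.scopes -= stale
    return stale

# Example: a CI bot that has not used its delete permission in a long time.
bot = NHIRecord(
    name="ci-deploy-bot", kind="service_account",
    scopes={"repo:read", "artifacts:write", "artifacts:delete"},
    last_used={"repo:read": datetime.now(timezone.utc),
               "artifacts:write": datetime.now(timezone.utc)},
)
print(revoke_stale_scopes(bot, timedelta(days=30)))  # -> {'artifacts:delete'}
```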
2. Behavior Analysis and Anomaly Detection:
Create AI/ML models to detect unusual or suspicious activities associated with NHIs, such as bots performing unauthorized actions or APIs exceeding normal usage patterns. Build systems that provide real-time alerts for NHI-related anomalies, with clear remediation steps.
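As one possible starting point for the detection challenge, the sketch below fits scikit-learn's Isolation Forest to per-identity API usage features; the features, baseline distribution, and contamination rate are illustrative assumptions, not a prescribed design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Illustrative features per NHI per hour: [request_count, distinct_endpoints,
# error_rate]. Baseline traffic stands in for learned "normal" behavior.
baseline = np.column_stack([
    rng.poisson(100, 500),        # typical request volume
    rng.integers(1, 10, 500),     # endpoints touched
    rng.uniform(0.0, 0.05, 500),  # error rate
])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A bot suddenly hammering many endpoints with a high error rate.
suspect = np.array([[5000, 40, 0.4]])
score = model.decision_function(suspect)[0]   # lower = more anomalous
if model.predict(suspect)[0] == -1:
    print(f"ALERT: anomalous NHI behavior (score={score:.3f}) - open a ticket")
```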
3. Securing High-Scale NHI Interactions:
Design mechanisms to secure machine-to-machine (M2M) communication across microservices, APIs, and IoT devices, even under high transaction loads.
Propose solutions for encrypted communication between NHIs, ensuring data integrity and confidentiality.
4. Automating Identity Lifecycle Management:
Develop automated tools for NHI lifecycle management, including creation, rotation, and deactivation of non-human credentials (e.g., API keys, certificates, tokens). Introduce zero-trust principles for NHIs, such as least privilege access and continuous validation of their activities.
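A minimal standard-library sketch of the lifecycle idea: issue high-entropy keys with an expiry, validate them continuously, and rotate without a service gap. The in-memory store and naming are assumptions for illustration; production systems would sit on a secrets manager such as Vault or a cloud KMS.

```python
import secrets
import hashlib
from datetime import datetime, timedelta, timezone

# Hypothetical key store: key_id -> (sha256 of secret, expiry).
# Only the hash is stored, so a store compromise does not leak live keys.
_key_store: dict[str, tuple[str, datetime]] = {}

def issue_api_key(nhi_name: str, ttl: timedelta = timedelta(days=30)) -> str:
    """Create a fresh credential for an NHI and schedule its expiry."""
    raw = secrets.token_urlsafe(32)           # high-entropy secret
    key_id = f"{nhi_name}:{secrets.token_hex(4)}"
    digest = hashlib.sha256(raw.encode()).hexdigest()
    _key_store[key_id] = (digest, datetime.now(timezone.utc) + ttl)
    return f"{key_id}.{raw}"                  # shown to the caller exactly once

def verify_api_key(presented: str) -> bool:
    """Constant-time check plus expiry enforcement (continuous validation)."""
    key_id, _, raw = presented.partition(".")
    record = _key_store.get(key_id)
    if record is None:
        return False
    digest, expiry = record
    if datetime.now(timezone.utc) >= expiry:
        _key_store.pop(key_id, None)          # deactivate expired credential
        return False
    return secrets.compare_digest(digest, hashlib.sha256(raw.encode()).hexdigest())

def rotate(nhi_name: str, old_key: str) -> str:
    """Rotation = issue new, then revoke old, so there is no gap in service."""
    new_key = issue_api_key(nhi_name)
    _key_store.pop(old_key.partition(".")[0], None)
    return new_key
```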
5. Monitoring and Governance:
Propose governance models to ensure NHIs comply with internal security policies and external regulations. Build dashboards that provide decision-makers with a holistic view of identity security gaps, integrating human and non-human risks.
6. Resilience Against Identity-Based Attacks:
Develop tools to mitigate identity-related threats, such as credential stuffing, NHI impersonation, or token hijacking. Use advanced threat modeling to anticipate and prevent potential identity-related exploits targeting NHIs.
7. Integrating NHI Security into DevOps Pipelines:
Ensure that identity security for NHIs is embedded into CI/CD pipelines, providing seamless integration for developers. Automate security checks for NHIs during application deployment, identifying risks before production.
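One small, concrete shape this could take: a pre-deployment scan for hard-coded NHI credentials that fails the pipeline stage on a hit. The patterns below are a tiny illustrative sample; real pipelines would run a dedicated scanner (e.g., gitleaks) with entropy checks and allowlists.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns for common credential shapes; real scanners ship
# hundreds of rules plus entropy heuristics.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(paths: list[Path]) -> list[str]:
    findings = []
    for path in paths:
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                line = text[: match.start()].count("\n") + 1
                findings.append(f"{path}:{line}: possible {name}")
    return findings

if __name__ == "__main__":
    hits = scan([Path(p) for p in sys.argv[1:]])
    for hit in hits:
        print(hit)
    sys.exit(1 if hits else 0)   # non-zero exit fails the CI stage
```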
AI Risk Mitigation
1. Misinformation Detection and Prevention:
AI-Driven Fact-Checking Systems: Build tools that automatically validate AI-generated content against trusted data sources to identify and flag misinformation.
Content Verification Pipelines: Develop mechanisms to track the provenance of AI-generated content, ensuring traceability and authenticity (see the sketch after this list).
Real-Time Monitoring: Create solutions to monitor and flag the dissemination of misinformation across social media platforms and other public domains.
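A minimal sketch of the provenance idea above: a hash-chained ledger in which every generated artifact commits to its model, prompt digest, and predecessor, so a monitor can flag circulating content that has no verifiable entry. The ledger format is an assumption for illustration; standards such as C2PA define production-grade equivalents.

```python
import hashlib
import json
import time

# Hypothetical provenance ledger: each entry commits to the previous one,
# so tampering with history breaks every later hash.
ledger: list[dict] = []

def record_generation(model_id: str, prompt: str, output: str) -> str:
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "model_id": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry["entry_hash"]

def is_traceable(content: str) -> bool:
    """A monitor can flag content with no ledger entry as unverified."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    return any(e["output_sha256"] == digest for e in ledger)

record_generation("summarizer-v2", "summarize the report", "The report says...")
print(is_traceable("The report says..."))   # True
print(is_traceable("fabricated claim"))     # False -> flag for review
```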
2. Combating Deep Fakes:
Detection Algorithms: Design advanced AI models that can detect deep fakes in images, videos, and audio with high accuracy.
Watermarking Solutions: Introduce robust watermarking techniques for AI-generated content to distinguish authentic creations from manipulated ones (see the sketch after this list).
Public Awareness Tools: Build interactive platforms that educate users on how to recognize deep fakes and understand their implications.
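As the simplest possible illustration of the watermarking idea above, the sketch below hides a marker string in the least significant bits of an image's red channel using Pillow. LSB marks are trivially removable, so this is a teaching sketch only; robust deployments would use learned or cryptographically signed watermarks.

```python
from PIL import Image

MAGIC = "AIGEN:"  # hypothetical marker string for AI-generated images

def embed_watermark(img: Image.Image, payload: str) -> Image.Image:
    """Hide payload bits in the least significant bit of the red channel."""
    bits = "".join(f"{b:08b}" for b in (MAGIC + payload).encode())
    out = img.convert("RGB").copy()
    px = out.load()
    w, h = out.size
    assert len(bits) <= w * h, "image too small for payload"
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)
    return out

def read_watermark(img: Image.Image, n_bytes: int = 64) -> str:
    px = img.convert("RGB").load()
    w, _ = img.size
    bits = "".join(str(px[i % w, i // w][0] & 1) for i in range(n_bytes * 8))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    text = data.decode(errors="ignore")
    return text[len(MAGIC):].split("\x00")[0] if text.startswith(MAGIC) else ""

marked = embed_watermark(Image.new("RGB", (128, 128), "gray"), "model=gen-v1\x00")
print(read_watermark(marked))  # -> model=gen-v1
```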
3. Enhancing AI Output Accuracy and Quality:
Bias Mitigation: Develop frameworks to detect and eliminate biases in training datasets, improving fairness in AI predictions and decisions.
Quality Assurance Models: Create AI systems that validate the accuracy of outputs in domains like healthcare, finance, and public safety.
Continuous Learning Pipelines: Implement solutions that allow AI systems to learn from their mistakes, reducing errors over time.
4. Ensuring Ethical AI Practices:
Transparency Frameworks: Build tools that provide explainability for AI decisions, helping users understand why specific outputs were generated.
Regulatory Compliance Tools: Design systems that enforce compliance with AI ethics guidelines and local regulations.
Ethical Governance Models: Propose governance structures for organizations to manage and review AI outputs for unintended consequences.
5. Scalable Risk Mitigation for AI in Critical Domains:
Healthcare and Diagnostics: Create mechanisms to ensure AI-generated medical predictions are accurate, avoiding misdiagnoses or harmful recommendations.
Finance and Credit Decisions: Build solutions to detect inaccuracies in AI models used for credit scoring, fraud detection, or risk analysis.
Public Safety Applications: Develop tools to monitor AI usage in law enforcement or surveillance, ensuring ethical and accurate deployment.
6. Real-Time Sanity Checks for AI Outputs:
Sanity Monitoring Tools: Design systems that analyze AI outputs for coherence, alignment with objectives, and consistency with human oversight.
Feedback Loops: Develop feedback systems that incorporate user reviews and corrections to refine AI outputs continually.
Threshold Validation: Introduce fail-safes that prevent AI from generating content or decisions beyond acceptable risk thresholds.
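The fail-safe in the last item can be as small as a gate between the model and the caller: score each candidate output against configured risk checks and refuse to release anything over threshold. The scorer below is a stub with invented terms and thresholds; a real gate might ensemble moderation models.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RiskCheck:
    name: str
    score: Callable[[str], float]   # returns risk in [0, 1]
    threshold: float

def risky_terms(text: str) -> float:
    # Stub scorer: fraction of hypothetical blocked phrases present.
    blocked = {"wire the funds", "disable the alarm"}
    return sum(term in text.lower() for term in blocked) / len(blocked)

CHECKS = [RiskCheck("policy_terms", risky_terms, threshold=0.4)]

def release_output(candidate: str) -> str:
    """Fail closed: any check above threshold blocks the output."""
    for check in CHECKS:
        s = check.score(candidate)
        if s >= check.threshold:
            raise PermissionError(
                f"output blocked by {check.name} (risk {s:.2f} >= {check.threshold})"
            )
    return candidate

print(release_output("Quarterly revenue grew 4%."))   # passes
# release_output("Please wire the funds now.")        # raises PermissionError
```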
7. Cross-Sector Collaboration and Standardization:
Industry-Specific Standards: Create guidelines and benchmarks for AI risk management tailored to specific sectors (e.g., media, education, defense).
Collaborative Platforms: Build systems that facilitate collaboration between organizations to share knowledge, tools, and data for mitigating AI risks.
Global AI Oversight Networks: Propose solutions for global cooperation in monitoring and mitigating risks from AI usage.
8. Public Trust in AI:
Trust-Building Initiatives: Develop campaigns or platforms to build public trust in AI by demonstrating its safety, reliability, and ethical use.
Human Oversight Interfaces: Create user-friendly dashboards that allow humans to oversee and intervene in AI operations when necessary.
AI Literacy Tools: Provide educational tools to help individuals and organizations understand AI risks and safeguards.
Securing AI Software Supply Chain
As organizations increasingly integrate AI into their operations, the AI software supply chain becomes a critical security focus. From model training and deployment to governance and operational use, every aspect of the AI lifecycle is susceptible to threats such as model tampering, data poisoning, and unauthorized access. This theme challenges participants to design robust solutions that safeguard the integrity, confidentiality, and availability of AI systems across the supply chain.
1. Protecting AI Models Against Tampering and Theft:
Model Integrity Verification: Develop cryptographic techniques (e.g., hashing, digital signatures) to ensure AI models remain unaltered from development to deployment (see the signing sketch after this list).
Model Access Control: Implement strong authentication and authorization mechanisms to restrict access to AI models during training and usage.
Model Watermarking: Introduce invisible watermarks into AI models to prevent intellectual property theft and detect unauthorized use.
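A minimal hash-and-sign sketch of the integrity-verification item above, using the `cryptography` package's Ed25519 primitives. The artifact layout and key handling are simplified assumptions; production pipelines would use something like Sigstore with keys held in an HSM.

```python
import hashlib
from pathlib import Path
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

def model_digest(path: Path) -> bytes:
    """Stream the model file through SHA-256 (weights can be gigabytes)."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

def sign_model(path: Path, key: Ed25519PrivateKey) -> bytes:
    return key.sign(model_digest(path))

def verify_model(path: Path, signature: bytes, pub: Ed25519PublicKey) -> bool:
    try:
        pub.verify(signature, model_digest(path))
        return True
    except InvalidSignature:
        return False

# Build-time: sign the artifact. Deploy-time: refuse to load if tampered.
key = Ed25519PrivateKey.generate()
artifact = Path("model.bin")
artifact.write_bytes(b"\x00fake-weights\x00")          # stand-in for real weights
sig = sign_model(artifact, key)
print(verify_model(artifact, sig, key.public_key()))   # True
artifact.write_bytes(b"\x00poisoned-weights\x00")
print(verify_model(artifact, sig, key.public_key()))   # False
```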
2. Securing AI Training Pipelines:
Data Poisoning Prevention: Build tools to monitor and sanitize training datasets, ensuring they are free from malicious inputs that could skew AI performance.
Training Environment Isolation: Design secure environments for model training, using techniques like sandboxing or containerization to prevent external interference.
Auditable Training Logs: Create systems that log and verify every step of the training process, ensuring accountability and transparency.
3. Governance and Policy Enforcement:
Model Governance Frameworks: Develop tools that track AI model versions, usage policies, and compliance with ethical standards.
Policy-as-Code for AI Security: Introduce infrastructure that automatically enforces security policies across the AI supply chain.
AI Risk Assessment Tools: Build solutions to continuously assess risks associated with AI usage, including third-party dependencies.
4. Securing Third-Party Dependencies:
Dependency Analysis: Design tools to analyze and verify the integrity of third-party libraries, frameworks, and pre-trained models used in AI development.
Supply Chain Risk Mitigation: Build systems to monitor vulnerabilities in third-party AI components and provide actionable remediation steps.
SBOM (Software Bill of Materials) for AI: Develop automated tools to generate SBOMs that include details of datasets, algorithms, and external libraries used.
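A toy generator for an AI-flavored SBOM in a CycloneDX-like JSON shape: it enumerates installed libraries via `importlib.metadata` and attaches hashes for the model and datasets. The schema here is a simplified assumption, not the full CycloneDX specification.

```python
import json
import hashlib
from importlib.metadata import distributions
from pathlib import Path

def ai_sbom(model_path: Path, dataset_paths: list[Path]) -> dict:
    """Collect code, model, and data components into one manifest."""
    def sha256(p: Path) -> str:
        return hashlib.sha256(p.read_bytes()).hexdigest()

    components = [
        {"type": "library", "name": d.metadata["Name"], "version": d.version}
        for d in distributions()
    ]
    components.append(
        {"type": "machine-learning-model", "name": model_path.name,
         "hashes": [{"alg": "SHA-256", "content": sha256(model_path)}]}
    )
    components += [
        {"type": "data", "name": p.name,
         "hashes": [{"alg": "SHA-256", "content": sha256(p)}]}
        for p in dataset_paths
    ]
    return {"bomFormat": "CycloneDX-like", "specVersion": "illustrative",
            "components": components}

# Usage: emit the manifest alongside the release artifact.
Path("model.bin").write_bytes(b"weights")
Path("train.csv").write_text("a,b\n1,2\n")
print(json.dumps(ai_sbom(Path("model.bin"), [Path("train.csv")]), indent=2)[:400])
```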
5. Safeguarding AI Operational Use:
Runtime Security for AI Models: Implement runtime monitoring solutions to detect anomalous behaviors or unauthorized queries to deployed models (see the sketch after this list).
Secure API Gateways: Build secure interfaces for accessing AI services, ensuring protection against injection attacks, data leakage, and unauthorized requests.
Real-Time Model Auditing: Create systems that continually validate deployed AI models for performance, fairness, and security compliance.
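Tying together the monitoring and gateway items above, a minimal guard in front of a deployed model can rate-limit each caller with a token bucket and keep an append-only audit trail of queries. Rates, caller names, and the stand-in model are illustrative assumptions.

```python
import time
from collections import defaultdict, deque

# Hypothetical guard in front of a deployed model: per-caller token-bucket
# rate limiting plus an append-only audit trail of every query.
RATE = 5          # tokens replenished per second
BURST = 10        # bucket capacity
_buckets: dict[str, list[float]] = defaultdict(lambda: [BURST, time.monotonic()])
audit_log: deque = deque(maxlen=100_000)

def _allow(caller: str) -> bool:
    tokens, last = _buckets[caller]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)
    if tokens < 1:
        _buckets[caller] = [tokens, now]
        return False
    _buckets[caller] = [tokens - 1, now]
    return True

def guarded_predict(caller: str, query: str, model) -> str:
    audit_log.append({"t": time.time(), "caller": caller, "query_len": len(query)})
    if not _allow(caller):
        raise RuntimeError(f"rate limit exceeded for {caller}; possible scraping")
    return model(query)

# Usage with a stand-in model:
echo_model = lambda q: f"prediction for {q!r}"
for i in range(12):
    try:
        guarded_predict("nhi:report-bot", f"q{i}", echo_model)
    except RuntimeError as e:
        print(e)   # fires once the burst budget is spent
```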
6. Resilience Against Adversarial Attacks:
Adversarial Example Mitigation: Develop AI models that can recognize and neutralize adversarial inputs designed to manipulate their outputs.
Robustness Testing Tools: Build platforms for stress-testing AI models against a wide range of attacks, from adversarial examples to system overloads (see the FGSM sketch after this list).
Model Recovery Solutions: Propose mechanisms to recover and restore models quickly after a security breach or failure.
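To make "adversarial example" concrete, here is the classic fast gradient sign method (FGSM) applied to a toy NumPy logistic-regression model, as referenced in the robustness-testing item above. A testing platform would sweep attacks like this across models and report how quickly predictions flip as the perturbation budget grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "deployed model": logistic regression with known weights.
w = rng.normal(size=20)
b = 0.1
x = rng.normal(size=20)                       # a benign input
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # model's clean prediction

def fgsm(x, y, w, b, eps):
    """One-step FGSM: perturb along the sign of the input gradient of the
    cross-entropy loss; for logistic regression d(loss)/dx = (p - y) * w."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

for eps in (0.0, 0.05, 0.1, 0.3):
    p_adv = sigmoid(w @ fgsm(x, y, w, b, eps) + b)
    flipped = (p_adv > 0.5) != (y == 1.0)
    print(f"eps={eps:.2f}  p={p_adv:.3f}  prediction flipped: {flipped}")
```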
7. Secure Collaboration in Multi-Party AI Development:
Federated Learning Security: Design tools to protect sensitive data during federated learning by leveraging techniques like differential privacy or homomorphic encryption.
Data Sharing Safeguards: Build secure platforms for sharing training data across organizations without exposing raw datasets.
Collaboration Tracking: Develop systems to track contributions and changes made by multiple stakeholders in a collaborative AI project.
8. Regulatory Compliance and Ethical AI:
Compliance Automation: Create tools that ensure AI supply chain processes align with global data protection laws (e.g., GDPR, HIPAA).
Ethics Validation: Build frameworks to evaluate AI models for alignment with ethical guidelines before deployment.
Explainable AI Governance: Design systems that provide transparency into AI decision-making, enabling organizations to demonstrate accountability.
9. End-to-End Visibility and Monitoring:
AI Security Dashboards: Create unified dashboards that give organizations a holistic view of risks across the AI software supply chain.
Threat Intelligence for AI: Build tools to gather and share insights on emerging threats specific to AI supply chains.
Continuous Monitoring and Response: Develop solutions for real-time monitoring of the entire AI lifecycle, coupled with automated threat response capabilities.
AI Usage in Public Sector
The public sector and critical infrastructure are prime targets for cyberattacks, disruptions, and inefficiencies. Leveraging AI can revolutionize security, resilience, and operational effectiveness in areas such as energy, transportation, healthcare, and public services. This theme challenges participants to create innovative AI-driven solutions that safeguard the public sector while maintaining trust, privacy, and operational integrity.
1. Enhancing Cybersecurity for Critical Infrastructure:
Threat Detection and Response: Build AI-powered systems that detect and respond to cyber threats in real time, ensuring the resilience of essential services like energy grids, water systems, and transportation networks.
Anomaly Detection in SCADA Systems: Design AI models that identify irregularities in Supervisory Control and Data Acquisition (SCADA) systems, preventing disruptions or sabotage (see the sketch after this list).
Incident Prediction Tools: Create predictive AI solutions that forecast potential vulnerabilities or threats to critical infrastructure.
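A deliberately simple illustration of the SCADA anomaly-detection item above: a rolling z-score over a single sensor stream that flags readings far outside recent history. Window size, threshold, and the pressure scenario are arbitrary assumptions; real systems model multivariate physics and operator context.

```python
from collections import deque
import math
import random

class RollingZScore:
    """Flag a reading when it sits more than `threshold` standard
    deviations from the mean of the trailing window."""

    def __init__(self, window: int = 120, threshold: float = 4.0):
        self.values: deque[float] = deque(maxlen=window)
        self.threshold = threshold

    def update(self, x: float) -> bool:
        alarm = False
        if len(self.values) >= 30:              # need a baseline first
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var) or 1e-9
            alarm = abs(x - mean) / std > self.threshold
        self.values.append(x)
        return alarm

random.seed(0)
detector = RollingZScore()
for t in range(300):
    psi = random.gauss(55.0, 0.5)   # pipeline pressure, normal operation
    if t == 250:
        psi = 80.0                  # sudden spike: possible valve tampering
    if detector.update(psi):
        print(f"t={t}: pressure anomaly ({psi:.1f} psi) - alert operators")
```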
2. AI for Emergency Management and Disaster Recovery:
AI-Driven Crisis Coordination: Develop AI systems that streamline resource allocation and communication during emergencies such as natural disasters or infrastructure failures.
Predictive Risk Assessment: Build tools that use AI to model the impact of disasters, helping governments prepare more effectively.
Automated Recovery Planning: Design systems that generate recovery strategies for critical infrastructure based on real-time data and historical patterns.
3. Securing Public Data and Services:
AI for Citizen Data Protection: Create tools to safeguard sensitive citizen information, ensuring compliance with privacy laws while enabling efficient service delivery.
Fraud Detection in Public Programs: Build AI systems that detect and prevent fraud or misuse of public resources, such as welfare programs or tax systems.
Secure Data Sharing: Propose AI-driven platforms that allow public sector organizations to share data securely across departments or agencies.
4. Safeguarding Physical Infrastructure with AI:
Smart Monitoring Systems: Design AI-powered solutions that monitor bridges, dams, pipelines, and other infrastructure for signs of wear, damage, or tampering.
AI in Surveillance and Perimeter Security: Build intelligent surveillance systems that identify potential threats while respecting privacy concerns.
AI for Infrastructure Optimization: Develop tools to optimize energy usage, reduce maintenance costs, and enhance the reliability of critical systems.
5. AI-Powered Public Safety and Law Enforcement:
Crime Prediction and Prevention: Create AI tools to analyze crime patterns and suggest strategies for proactive law enforcement.
Forensic Analysis Automation: Build AI solutions that assist in processing and analyzing evidence more efficiently.
Ethical Surveillance Systems: Develop AI models that enhance public safety through surveillance while adhering to strict ethical and privacy standards.
6. Protecting AI Systems in Public Sector Applications:
Securing AI Models and Data: Build solutions to safeguard AI systems used in the public sector from adversarial attacks, model theft, and data breaches.
Governance Frameworks for AI: Propose models for transparent and ethical governance of AI systems deployed in critical infrastructure.
Real-Time Monitoring and Audit: Create tools that continuously monitor and audit AI systems to ensure compliance with regulations and policies.
7. Improving Public Health and Safety with AI:
AI for Disease Outbreak Prediction: Develop tools that analyze health data to predict and respond to disease outbreaks or pandemics.
Emergency Health Response Systems: Design AI-driven solutions that optimize emergency medical services and disaster health management.
Health Data Insights: Build tools that help public health officials analyze trends and make data-driven decisions for better resource allocation.
8. Transparent and Accountable AI Usage:
Explainable AI for Public Decisions: Design AI systems that provide clear, understandable reasoning behind decisions affecting citizens.
Ethics-Driven AI Governance: Build frameworks to ensure AI deployments in the public sector are transparent, unbiased, and fair.
Citizen Engagement Platforms: Create AI tools that involve citizens in decision-making, such as participatory budgeting or urban planning.
9. Optimizing Public Sector Efficiency:
AI for Smart Cities: Develop systems that enhance urban planning, traffic management, and public transport using AI-driven insights.
Predictive Maintenance for Public Assets: Build tools to anticipate maintenance needs for public assets, reducing downtime and costs.
AI in Administrative Tasks: Propose AI solutions to automate bureaucratic tasks, allowing public servants to focus on high-impact work.
Confidential Computing
1. Secure AI Model Training:
Privacy-Preserving Training: Build tools that use secure enclaves or homomorphic encryption to enable collaborative training of AI models without exposing raw data.
Federated Learning with Confidentiality: Develop solutions that combine confidential computing and federated learning to train models across multiple organizations while keeping data isolated.
Secure Parameter Sharing: Create mechanisms to securely exchange model parameters during distributed training, ensuring integrity and confidentiality.
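Outside a hardware enclave, the parameter-sharing idea can at least be approximated in software. The sketch below seals a serialized weight update with Fernet (authenticated symmetric encryption from the `cryptography` package), so an untrusted aggregator relay can neither read nor silently modify updates; key distribution is assumed to happen out of band, e.g. via a KMS or an attestation handshake.

```python
import json
import numpy as np
from cryptography.fernet import Fernet, InvalidToken

# Shared key, assumed to be exchanged out of band; the relay never sees it.
key = Fernet.generate_key()

def pack_update(weights: np.ndarray, round_id: int, f: Fernet) -> bytes:
    payload = json.dumps({"round": round_id, "w": weights.tolist()}).encode()
    return f.encrypt(payload)          # encrypts and authenticates (AES + HMAC)

def unpack_update(token: bytes, expected_round: int, f: Fernet) -> np.ndarray:
    payload = json.loads(f.decrypt(token))   # raises InvalidToken if tampered
    if payload["round"] != expected_round:   # reject replayed updates
        raise ValueError("stale or replayed update")
    return np.asarray(payload["w"])

f = Fernet(key)
update = pack_update(np.array([0.12, -0.03, 0.4]), round_id=7, f=f)

# An untrusted relay flipping a byte is detected, not silently accepted.
tampered = update[:-1] + bytes([update[-1] ^ 1])
try:
    unpack_update(tampered, expected_round=7, f=f)
except InvalidToken:
    print("tampered update rejected")
print(unpack_update(update, expected_round=7, f=f))
```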
2. Protecting Customer Personal Data:
Real-Time Data Protection: Design systems that process customer data securely in memory, preventing leaks or unauthorized access during computation.
Tokenization and Anonymization: Build AI-driven tools that anonymize or tokenize sensitive data before it enters processing pipelines while maintaining its usability (see the sketch after this list).
Compliance Automation: Develop solutions that ensure secure data processing adheres to privacy regulations like GDPR, HIPAA, or CCPA.
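A minimal sketch of the tokenization item above: deterministic HMAC-SHA-256 under a vault-held key maps each identifier to a stable token, so joins and aggregates still line up while raw values stay out of the pipeline. The key and field names are illustrative; format-preserving encryption or a token vault would be the production-grade options.

```python
import hmac
import hashlib

# Key assumed to live in a secrets manager; pipelines only ever see tokens.
TOKENIZATION_KEY = b"demo-key-change-me"

def tokenize(value: str, field: str) -> str:
    """Deterministic per-field token: same input -> same token, so
    aggregation and joins across datasets still line up."""
    mac = hmac.new(TOKENIZATION_KEY, f"{field}:{value}".encode(), hashlib.sha256)
    return f"tok_{mac.hexdigest()[:16]}"

record = {"email": "ada@example.com", "ssn": "123-45-6789", "plan": "premium"}
SENSITIVE = {"email", "ssn"}

safe_record = {k: tokenize(v, k) if k in SENSITIVE else v
               for k, v in record.items()}
print(safe_record)
# {'email': 'tok_...', 'ssn': 'tok_...', 'plan': 'premium'}
```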
3. Enabling Secure Multi-Party Collaboration:
Secure Data Sharing Platforms: Create frameworks that allow multiple organizations to collaborate on sensitive data analysis without exposing underlying datasets.
Cross-Border Data Processing: Design systems that enable secure data processing across jurisdictions with differing data sovereignty laws.
Zero-Trust Collaboration: Build tools that enforce zero-trust principles in multi-party computations, ensuring no participant has unnecessary access to raw data.
4. Enhancing Cloud Security with Confidential Computing:
Trusted Execution Environments (TEEs): Develop applications that leverage TEEs to isolate and protect sensitive workloads in the cloud.
Secure Data Migration: Build tools for securely transferring sensitive data between on-premise and cloud environments using confidential computing techniques.
Encryption in Use: Implement solutions that enable end-to-end encryption for data during its entire lifecycle—at rest, in transit, and in use.
5. Advancing Confidential AI Inference:
Protected AI Predictions: Design systems that allow sensitive data to be processed by AI models securely, ensuring both inputs and outputs remain private.
Secure Model Serving: Develop mechanisms for deploying AI models in confidential environments, preventing adversaries from accessing model logic or sensitive inputs.
Encrypted Queries: Build tools that enable secure querying of AI models without exposing either the query data or model predictions.
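As a taste of the encrypted-query idea, the sketch below scores a linear model over Paillier-encrypted inputs using the third-party `phe` library (`pip install phe`): the server computes on ciphertexts it cannot read, and only the client can decrypt the result. This is an assumption-laden toy; private inference over deep models needs heavier machinery such as TEEs or modern FHE schemes.

```python
# pip install phe  (python-paillier, additively homomorphic encryption)
from phe import paillier

# Client side: generate keys and encrypt the sensitive feature vector.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
features = [0.62, 1.30, -0.58]                 # e.g., health or credit inputs
encrypted = [public_key.encrypt(x) for x in features]

# Server side: plaintext model weights, ciphertext-only computation.
# Paillier supports ciphertext + ciphertext and ciphertext * plaintext,
# which is exactly enough for a linear score w.x + b.
weights = [0.8, -0.4, 1.1]
bias = 0.05
encrypted_score = sum(w * e for w, e in zip(weights, encrypted)) + bias

# Client side: only the key holder can read the result.
print("decrypted:", private_key.decrypt(encrypted_score))
print("plaintext:", sum(w * x for w, x in zip(weights, features)) + bias)
```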
6. Safeguarding Financial and Healthcare Data:
Secure Payment Processing: Create solutions that use confidential computing to protect sensitive financial data during payment authorization and processing.
Privacy-Preserving Health Analytics: Build platforms for secure health data analysis, enabling insights without violating patient privacy.
Fraud Detection in Trusted Environments: Develop AI-powered fraud detection systems that operate securely within confidential computing environments.
7. Monitoring and Governance for Confidential Computing:
Visibility into Encrypted Workloads: Create tools that monitor the performance and integrity of confidential computing environments without compromising data security.
Governance Dashboards: Build dashboards that provide organizations with insights into how confidential computing is being used, ensuring transparency and compliance.
Auditable Security Controls: Propose solutions that log and verify secure processing events, enabling organizations to demonstrate compliance with regulatory requirements.
8. Confidential Computing for IoT and Edge Devices:
Secure Data Processing on Edge: Design systems that enable secure computation of sensitive data on IoT devices or edge networks using lightweight confidential computing techniques.
End-to-End Encryption in Edge AI: Build tools that protect sensitive data collected by edge devices, ensuring secure transmission and processing.
IoT Data Privacy Frameworks: Create solutions to protect sensitive data in IoT environments, from collection to analysis and reporting.
9. Confidential Computing in Analytics and Decision-Making:
Secure Data Aggregation: Build tools to aggregate and analyze sensitive data across organizations while preserving individual privacy.
AI-Driven Risk Assessment: Develop confidential computing solutions for securely assessing risks in areas like insurance, credit scoring, or fraud detection.
Encrypted Decision Systems: Create systems that perform secure computations for high-stakes decisions without exposing sensitive inputs.
10. Educating and Empowering Organizations:
Developer Toolkits: Design developer-friendly SDKs or APIs that simplify the adoption of confidential computing for secure application development.
Training and Awareness Programs: Create educational resources to help organizations understand and implement confidential computing principles.
Simulation Environments: Build platforms that allow organizations to test confidential computing solutions in controlled environments before full-scale deployment.
Agentic AI
The complexity of modern cybersecurity, application security (AppSec), and privacy challenges demands innovative solutions to scale expertise and streamline operations. Agentic AI represents a transformative approach, deploying specialized AI agents that mimic or enhance the capabilities of human security analysts, engineers, and compliance officers. These agents can act as virtual Tier 1 SOC analysts, AppSec engineers, or privacy advisors, making security operations faster, more efficient, and more precise.
This theme challenges participants to develop intelligent, autonomous AI agents tailored for security use cases, empowering organizations to proactively defend against threats, improve software security, and uphold data privacy and compliance.
1. AI Agents for Security Operations Centers (SOC):
Tier 1 SOC Analyst AI: Build AI agents that handle routine SOC tasks, such as analyzing security alerts from SIEM systems and filtering false positives, prioritizing threats based on risk and relevance to the organization, and escalating critical incidents with detailed contextual analysis (see the triage sketch after this list).
Automated Threat Hunting: Design agents that continuously scan for unusual patterns, leveraging behavioral analytics and threat intelligence to detect advanced persistent threats (APTs).
Incident Response Orchestration:
Create agents capable of automating initial response actions (e.g., isolating compromised endpoints, applying firewall rules) to minimize threat impact.
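A rule-scored triage loop gives one skeleton for the Tier 1 analyst idea above: score each alert, auto-close obvious noise, enrich the middle band, and escalate the rest with context. The features, weights, and thresholds are invented for illustration; a real agent would combine a model (or LLM) with asset and identity context.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str              # e.g., "siem", "edr"
    rule: str
    asset_criticality: int   # 1 (lab box) .. 5 (domain controller)
    confidence: float        # detector's own confidence, 0..1
    seen_before: int         # times this exact alert fired this week

def triage_score(a: Alert) -> float:
    """Invented heuristic: weigh confidence and asset value, discount
    alerts that fire constantly (likely noisy rules)."""
    noise_discount = 1.0 / (1 + a.seen_before / 10)
    return a.confidence * (a.asset_criticality / 5) * noise_discount

def handle(a: Alert) -> str:
    s = triage_score(a)
    if s < 0.15:
        return f"auto-close  ({s:.2f}) {a.rule}: matches known-noisy pattern"
    if s < 0.60:
        return f"enrich+queue({s:.2f}) {a.rule}: add asset owner, recent logins"
    return f"ESCALATE    ({s:.2f}) {a.rule}: page on-call with full context"

alerts = [
    Alert("siem", "impossible-travel login", 5, 0.9, 0),
    Alert("edr", "powershell encoded cmd", 3, 0.5, 4),
    Alert("siem", "port scan from scanner-vm", 1, 0.3, 60),
]
for a in sorted(alerts, key=triage_score, reverse=True):
    print(handle(a))
```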
2. AI-Driven Application Security Agents:
Threat Modeling Assistant:
Develop AI agents that assist engineers in identifying potential vulnerabilities during the design phase, providing real-time suggestions for secure architecture.
Code Security Auditors:
Create agents that perform continuous static and dynamic code analysis, identifying vulnerabilities and suggesting remediations with minimal false positives.
Real-Time Security Feedback in CI/CD:
Build agents that integrate into CI/CD pipelines, offering immediate feedback on code changes that introduce security risks, ensuring secure deployment.
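One concrete shape for that feedback: an AST pass over changed Python files that flags a few classically dangerous constructs and fails the check with a message the developer sees on the pull request. The two rules below are a tiny illustrative sample of what scanners like Bandit implement at scale.

```python
import ast
import sys

# Illustrative deny-list; production checkers ship far more rules.
def findings_for(source: str, filename: str) -> list[str]:
    issues = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if isinstance(node, ast.Call):
            name = getattr(node.func, "id", getattr(node.func, "attr", ""))
            if name == "eval":
                issues.append(f"{filename}:{node.lineno}: eval() on dynamic input")
            if name == "run":
                for kw in node.keywords:
                    if kw.arg == "shell" and getattr(kw.value, "value", False) is True:
                        issues.append(
                            f"{filename}:{node.lineno}: subprocess with shell=True"
                        )
    return issues

sample = """\
import subprocess
def deploy(cmd):
    subprocess.run(cmd, shell=True)
    return eval(cmd)
"""
issues = findings_for(sample, "deploy.py")
for issue in issues:
    print("BLOCKED:", issue)
sys.exit(1 if issues else 0)   # non-zero exit fails the pipeline stage
```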
3. Privacy and Compliance AI Agents:
AI Privacy Assistant:
Design agents that analyze data flows, identify sensitive information, and ensure compliance with privacy laws such as GDPR, CCPA, or HIPAA.
Data Subject Request (DSR) Automation:
Create agents that handle user requests for accessing, deleting, or modifying their personal data in compliance with regulatory requirements.
AI for Privacy Risk Assessment:
Build tools that assess privacy risks in business processes or third-party integrations and recommend mitigations.
4. Threat Intelligence and Prediction:
Real-Time Threat Intelligence Agents:
Design AI agents that aggregate, analyze, and contextualize threat intelligence feeds, providing actionable insights to security teams.
Proactive Risk Management:
Create agents that predict potential attack vectors based on organizational vulnerabilities, industry trends, or attacker behaviors.
Supply Chain Security Monitoring:
Build AI agents that monitor dependencies, vendors, and third-party software for emerging threats or vulnerabilities.
5. Cross-Functional AI Security Agents:
Virtual Security Coaches:
Develop agents that guide non-technical employees through secure practices, such as recognizing phishing attempts or safeguarding credentials.
Compliance Workflow Automation:
Create agents that automate routine compliance tasks, such as generating audit reports or managing security certifications.
Security Policy Enforcers:
Build tools that monitor systems for policy violations, provide real-time alerts, and enforce compliance without human intervention.
6. AI Agents for Security Metrics and Reporting:
Automated Security Insights Generator:
Design agents that produce detailed yet digestible reports for leadership, summarizing incidents, vulnerabilities, and risk trends.
Board-Level Risk Presentations:
Build AI tools capable of translating technical security data into business-focused insights for executive stakeholders.
Continuous Security Posture Evaluation:
Create agents that provide real-time dashboards reflecting the organization’s security posture and progress over time.
7. Training and Upskilling with AI Agents:
AI Mentors for SOC Analysts:
Develop agents that provide guidance and real-time recommendations for junior SOC analysts, helping them learn on the job.
Gamified Security Simulations:
Create AI-powered environments for simulating attacks and defenses, allowing teams to test their skills and strategies.
Personalized Learning Agents:
Build AI agents that recommend tailored learning paths based on an individual’s role and performance in security tasks.
8. Ethical and Responsible AI Agent Use:
Explainable Security Decisions:
Design agents that clearly explain their reasoning behind decisions, such as escalating an alert or recommending a fix.
Bias Detection and Mitigation:
Create tools that ensure AI agents do not inadvertently introduce bias, especially in privacy or compliance contexts.
Accountable AI Frameworks:
Build systems that log and validate every action taken by AI agents, ensuring accountability and traceability.
9. Customizable and Scalable AI Agents:
Domain-Specific Customization:
Design agents that can be tailored for specific industries, such as healthcare, finance, or manufacturing, to meet unique security challenges.
Multi-Agent Collaboration:
Build frameworks for multiple AI agents to collaborate, sharing insights across SOC, AppSec, and compliance domains for comprehensive security.
Scalability for Large Enterprises:
Create solutions that allow AI agents to operate effectively across distributed environments and large-scale infrastructures.