
Hackers to compete for $20 million prize

The U.S. AI Cyber Challenge (AIxCC), run by the Defense Advanced Research Projects Agency (DARPA), is a new initiative launched by the Biden administration in August 2023 to use artificial intelligence (AI) to protect critical U.S. infrastructure from cybersecurity risks.

The challenge will offer $20 million in prize money and includes collaboration from leading AI companies Anthropic, Google, Microsoft and OpenAI, who will make their technology available for the competition. The challenge was announced at the Black Hat USA hacking conference in Las Vegas.

The competition will consist of three stages:

  • Qualifying event in the spring of 2024
  • Semifinal at DEF CON 2024
  • Final at DEF CON 2025 

The competitors will be asked to use AI to secure vital software and to open source their systems so that their solutions can be used widely (does that create a risk in itself?). The top three teams will be eligible for additional prizes, including a top prize of $4 million for the team that best secures vital software.

The challenge aims to explore what’s possible when experts in cybersecurity and AI have access to a suite of cross-company resources. The U.S. government hopes that the promise of AI can help further secure critical U.S. systems and protect Americans from future cyber attacks!

Limitations and risks of using AI for security

However, there are flaws and drawbacks to using AI for cybersecurity, for attackers and defenders alike.

  • Lack of transparency and explainability: AI systems are often complex and opaque, making it difficult to understand how they make decisions or what factors influence their outputs. This can lead to trust issues, ethical dilemmas, and legal liabilities.
  • Overreliance on AI: AI systems are not infallible and may make mistakes or produce false positives or negatives. Relying too much on AI without human oversight or verification can result in missed threats, erroneous actions, or unintended consequences (the first sketch after this list illustrates why).
  • Bias and discrimination: AI systems may inherit or amplify human biases or prejudices present in the data, algorithms, or design of the systems. This can result in unfair or discriminatory outcomes, such as excluding certain groups of people from access to services or opportunities, or targeting them for malicious attacks.
  • Vulnerability to attacks: AI systems may be susceptible to adversarial attacks, such as data poisoning, model stealing, evasion, or exploitation. These attacks can compromise the integrity, availability, or confidentiality of the systems, or manipulate them to produce malicious outputs (the second sketch below shows a toy poisoning attack).
  • High cost: Developing and maintaining AI systems for cybersecurity requires significant resources, such as computing power, memory, data, and skilled personnel. These resources may not be easily accessible or affordable for many organizations or individuals.
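
A minimal sketch of the overreliance problem, in Python. All numbers here are invented for illustration (they are not from the challenge or this article); the point is the base-rate effect: even a detector that catches 99% of attacks and wrongly flags only 1% of benign events produces alert queues that are overwhelmingly noise when real attacks are rare.

    # Toy base-rate arithmetic: why a "99% accurate" AI detector still
    # floods analysts with false positives. All figures are hypothetical.
    true_positive_rate = 0.99    # fraction of real attacks the detector flags
    false_positive_rate = 0.01   # fraction of benign events it wrongly flags
    attack_prevalence = 0.0001   # 1 in 10,000 events is actually malicious

    events = 1_000_000
    attacks = events * attack_prevalence         # 100 real attacks
    benign = events - attacks                    # 999,900 benign events

    true_alerts = attacks * true_positive_rate   # ~99 genuine alerts
    false_alerts = benign * false_positive_rate  # ~9,999 spurious alerts

    precision = true_alerts / (true_alerts + false_alerts)
    print(f"Alerts raised: {true_alerts + false_alerts:,.0f}")  # ~10,098
    print(f"Share that are real: {precision:.1%}")              # ~1.0%

Roughly 1 alert in 100 is a real attack, so an analyst who blindly trusts every alert, or tunes them all out, fails either way. Hence the need for human verification.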
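
And a second sketch, also hypothetical, of the vulnerability point: a deliberately naive anomaly detector that learns "normal" request sizes from training data, and a data-poisoning attack that stretches its threshold until a malicious payload slips through. Real detectors are far more sophisticated, but the failure mode is the same in spirit.

    # Toy data-poisoning attack on a trivially simple anomaly detector.
    def train_detector(normal_sizes):
        """Learn a mean and a crude threshold from supposedly clean data."""
        mean = sum(normal_sizes) / len(normal_sizes)
        spread = max(abs(x - mean) for x in normal_sizes)
        return mean, spread * 1.5  # flag anything beyond 1.5x the seen spread

    def is_anomalous(size, mean, threshold):
        return abs(size - mean) > threshold

    clean = [200, 210, 190, 205, 195]      # normal traffic sizes
    mean, thr = train_detector(clean)
    print(is_anomalous(5000, mean, thr))   # True: the attack payload is caught

    # The attacker slips oversized "normal" samples into the training data,
    # widening the learned spread until the same payload looks ordinary.
    poisoned = clean + [2000, 3500, 5000]
    mean, thr = train_detector(poisoned)
    print(is_anomalous(5000, mean, thr))   # False: the same payload now evades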

[Cartoon] AI and cybersecurity systems: 'Well, what do you think of AI and cybersecurity sharing resources?' 'Ha! Playing right into our hands.'

These are some of the flaws of using AI for cybersecurity, but they are not insurmountable. With proper research, regulation, education, and collaboration, AI can be a powerful ally in enhancing cybersecurity and protecting against cyber threats – that is, until it takes over, but that will never happen… will it?
