Hackers to compete for $20 million prize


The AI Cyber Challenge is a new initiative launched by the Biden administration in August 2023 to use artificial intelligence (AI) to protect critical U.S. infrastructure from cybersecurity risks.

The challenge will offer $20 million in prize money and includes collaboration from leading AI companies Anthropic, Google, Microsoft and OpenAI, who will make their technology available for the competition. The challenge was announced at the Black Hat USA hacking conference in Las Vegas.

The competition will consist of three stages:

  • Qualifying event in the spring of 2024
  • Semifinal at DEF CON 2024
  • Final at DEF CON 2025 

The competitors will be asked to use AI to secure vital software and to open source their systems so that their solutions can be used widely (does that create a risk in itself?). The top three teams will be eligible for additional prizes, including a top prize of $4 million for the team that best secures vital software.

The challenge aims to explore what’s possible when experts in cybersecurity and AI have access to a suite of cross-company resources. The U.S. government hopes that the promise of AI can help further secure critical U.S. systems and protect Americans from future cyber attacks!

Limitations and risks of using AI for security

However, there are flaws and drawbacks to using AI for cybersecurity, for both attackers and defenders.

  • Lack of transparency and explainability: AI systems are often complex and opaque, making it difficult to understand how they make decisions or what factors influence their outputs. This can lead to trust issues, ethical dilemmas, and legal liabilities.
  • Overreliance on AI: AI systems are not infallible and may make mistakes or produce false positives or negatives. Relying too much on AI without human oversight or verification can result in missed threats, erroneous actions, or unintended consequences.
  • Bias and discrimination: AI systems may inherit or amplify human biases or prejudices that are present in the data, algorithms, or design of the systems. This can result in unfair or discriminatory outcomes, such as excluding certain groups of people from access to services or opportunities, or targeting them for malicious attacks.
  • Vulnerability to attacks: AI systems may be susceptible to adversarial attacks, such as data poisoning, model stealing, evasion, or exploitation. These attacks can compromise the integrity, availability, or confidentiality of the systems, or manipulate them to produce malicious outputs.
  • High cost: Developing and maintaining AI systems for cybersecurity requires a lot of resources, such as computing power, memory, data, and skilled personnel. These resources may not be easily accessible or affordable for many organizations or individuals.
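
The overreliance point above is easy to see with a toy example. The labels and predictions below are made up purely for illustration, but they show how a detector whose headline accuracy looks respectable can still let a real threat through, which is why human verification still matters:

```python
# Toy illustration (made-up data): a hypothetical AI threat detector
# with decent accuracy can still produce missed threats and false alarms.

truth = [1, 0, 0, 1, 0, 1, 0, 0, 0, 1]  # 1 = real threat, 0 = benign
preds = [1, 0, 1, 0, 0, 1, 0, 0, 0, 1]  # detector output for each event

false_negatives = sum(t == 1 and p == 0 for t, p in zip(truth, preds))
false_positives = sum(t == 0 and p == 1 for t, p in zip(truth, preds))
accuracy = sum(t == p for t, p in zip(truth, preds)) / len(truth)

print(f"accuracy: {accuracy:.0%}")           # 80% looks respectable...
print(f"missed threats: {false_negatives}")  # ...but 1 real attack slipped through
print(f"false alarms: {false_positives}")    # and 1 benign event was flagged
```

A missed threat (false negative) is the costly case in security: one undetected intrusion can outweigh many correctly classified benign events.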

AI and cybersecurity systems
‘Well, what do you think of AI and cybersecurity sharing resources?’ ‘Ha! Playing right into our hands.’

These are some of the flaws of using AI for cybersecurity, but they are not insurmountable. With proper research, regulation, education, and collaboration, AI can be a powerful ally in enhancing cybersecurity and protecting against cyber threats – that is until it takes over, but that will never happen… will it?

Google says people should use its search engine to check whether information provided by its Chatbot, Bard, is actually accurate


Accuracy

According to a recent news article, Google says people should use its search engine to check whether information provided by Bard is actually accurate, as it may display inaccurate or offensive information that doesn’t represent Google’s views. Just Google’s views, I wonder…?

Google’s UK boss Debbie Weinstein said Bard was not really the place that you go to search for specific information, but rather an experiment best suited for collaboration around problem solving and creating new ideas.

Robot AI
‘Just checking the answer with my search engine!’

Hallucinate

According to an Android Authority article, both Bard and ChatGPT can hallucinate, or confidently lie, when asked about obscure topics. Bard does offer a link to search results and will sometimes cite a source or two. However, Google states that Bard can even lie about its own inner workings, so you cannot trust everything it says…

Testing… 1… 2… 3…?

According to a report by Marie Haynes, Bard predicts it will generate accurate responses 85% of the time by September 2023, but in an experiment it posted an accuracy score of 63%, meaning it had incorrect information in more than a third of its responses.
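
The arithmetic behind that claim is simple, assuming accuracy here means the share of responses containing no incorrect information:

```python
# A 63% accuracy score leaves 37% of responses with incorrect information,
# which is just over one response in three.
accuracy = 0.63            # score reported in the experiment
error_rate = 1 - accuracy  # share of responses with incorrect information

print(f"error rate: {error_rate:.0%}")  # 37%
assert error_rate > 1 / 3               # i.e. more than one in three
```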

Early days, or harbouring a problem for the future?