Hackers to compete for $20 million prize

The U.S. AI Cyber Challenge is a new initiative launched by the Biden administration in August 2023 that uses artificial intelligence (AI) to protect critical U.S. infrastructure from cybersecurity risks.

The challenge will offer $20 million in prize money and includes collaboration from leading AI companies Anthropic, Google, Microsoft and OpenAI, who will make their technology available for the competition. The challenge was announced at the Black Hat USA hacking conference in Las Vegas.

The competition will consist of three stages:

  • Qualifying event in the spring of 2024
  • Semifinal at DEF CON 2024
  • Final at DEF CON 2025 

The competitors will be asked to use AI to secure vital software and to open-source their systems so that their solutions can be used widely (does that create a risk in itself?). The top three teams will be eligible for additional prizes, including a top prize of $4 million for the team that best secures vital software.

The challenge aims to explore what’s possible when experts in cybersecurity and AI have access to a suite of cross-company resources. The U.S. government hopes that the promise of AI can help further secure critical U.S. systems and protect Americans from future cyber attacks!

Limitations and risks of using AI for security

However, there are flaws and drawbacks to using AI for cybersecurity, for attackers and defenders alike.

  • Lack of transparency and explainability: AI systems are often complex and opaque, making it difficult to understand how they make decisions or what factors influence their outputs. This can lead to trust issues, ethical dilemmas, and legal liabilities.
  • Overreliance on AI: AI systems are not infallible and may make mistakes or produce false positives or negatives. Relying too heavily on AI without human oversight or verification can result in missed threats, erroneous actions, or unintended consequences.
  • Bias and discrimination: AI systems may inherit or amplify human biases or prejudices that are present in the data, algorithms, or design of the systems. This can result in unfair or discriminatory outcomes, such as excluding certain groups of people from access to services or opportunities, or targeting them for malicious attacks.
  • Vulnerability to attacks: AI systems may be susceptible to adversarial attacks, such as data poisoning, model stealing, evasion, or exploitation. These attacks can compromise the integrity, availability, or confidentiality of the systems, or manipulate them to produce malicious outputs.
  • High cost: Developing and maintaining AI systems for cybersecurity requires substantial resources, such as computing power, memory, data, and skilled personnel. These resources may not be easily accessible or affordable for many organizations or individuals.

AI and cybersecurity systems
‘Well, what do you think of AI and cybersecurity sharing resources?’ ‘Ha! Playing right into our hands.’
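The data-poisoning risk in the list above can be made concrete with a deliberately tiny sketch: a hypothetical attacker who can tamper with training labels cripples a simple nearest-centroid detector. All the data, labels, and numbers here are invented for illustration; real attacks target far larger models and datasets.

```python
# Toy data-poisoning demo: flipping training labels degrades a
# nearest-centroid classifier. Everything here is illustrative.

def train_centroids(data):
    """Compute the mean feature value for each label."""
    sums, counts = {}, {}
    for x, label in data:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(centroids, x):
    """Assign x to the label of the nearest centroid."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def accuracy(centroids, data):
    correct = sum(1 for x, label in data if classify(centroids, x) == label)
    return correct / len(data)

# Clean training data: "benign" traffic scores near 1.0, "malicious" near 9.0.
clean = [(1.0, "benign"), (1.2, "benign"), (0.8, "benign"),
         (9.0, "malicious"), (8.8, "malicious"), (9.2, "malicious")]
test = [(1.1, "benign"), (8.9, "malicious")]

# Poisoned copy: the attacker relabels malicious samples as benign,
# so the trained model waves malicious traffic through.
poisoned = [(x, "benign") if label == "malicious" else (x, label)
            for x, label in clean]

clean_model = train_centroids(clean)
poisoned_model = train_centroids(poisoned)

print(accuracy(clean_model, test))     # 1.0 - both test samples correct
print(accuracy(poisoned_model, test))  # 0.5 - malicious sample now missed
```

The point is not the toy classifier but the asymmetry: the attacker never touched the model itself, only its training data, which is exactly why data provenance and human oversight matter.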

These are some of the flaws of using AI for cybersecurity, but they are not insurmountable. With proper research, regulation, education, and collaboration, AI can be a powerful ally in enhancing cybersecurity and protecting against cyber threats – that is, until it takes over, but that will never happen… will it?

Google says people should use its search engine to check whether information provided by its chatbot, Bard, is actually accurate

Accuracy

According to a recent news article, Google says people should use its search engine to check whether information provided by Bard is actually accurate, as it may display inaccurate or offensive information that doesn’t represent Google’s views. Just Google’s views, I wonder…?

Google’s UK boss Debbie Weinstein said Bard was not really the place that you go to search for specific information, but rather an experiment best suited for collaboration around problem solving and creating new ideas.

‘Just checking the answer with my search engine!’

Hallucinate

According to an Android Authority article, both Bard and ChatGPT can hallucinate, or confidently lie, when asked about obscure topics. Bard does offer a link to search results and will sometimes cite a source or two. However, Google states that Bard can even lie about its own inner workings, so you cannot trust everything it says…

Testing… 1… 2… 3…?

According to a report by Marie Haynes, Bard predicted it would generate accurate responses 85% of the time by September 2023, but in an experiment it posted an accuracy score of 63%, meaning more than a third of its responses contained incorrect information.

Early days, or harbouring a problem for the future?

AI race gathers momentum as China’s Baidu claims its Ernie Bot is better than ChatGPT on key tests

Baidu said its AI system, Ernie 3.5, outperformed OpenAI’s ChatGPT and GPT-4 in several key areas.

  • The chatbot was revealed in March 2023 and has since been publicly tested in China. It is based on Baidu’s foundational AI model, ERNIE.
  • Baidu’s advancements underscore the intense competition taking place in generative AI, with technology giants in the US and China rapidly advancing their AI models.

ERNIE: Enhanced Language RepresentatioN with Informative Entities

US and China AI Bots go head to head

Ernie was first introduced in 2019, and Baidu has been improving and upgrading it with new versions ever since. The latest version, Ernie 3.5, was announced in June 2023 and claims to outperform OpenAI’s ChatGPT and GPT-4 in several key areas.

Baidu’s Ernie is an artificial intelligence (AI) model that powers the company’s chatbot service, Ernie Bot. Ernie stands for Enhanced Language RepresentatioN with Informative Entities, and it is a natural language processing (NLP) deep-learning model that can understand and generate natural language.

Trained on large data sets

Ernie 3.5 is based on Baidu’s foundational AI model, which is trained on huge amounts of data from various domains, such as news, social media, encyclopedias, books, and more. Ernie 3.5 can handle various NLP tasks, such as question answering, dialogue generation, text summarization, sentiment analysis, and more.
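As a rough intuition for one of those tasks, sentiment analysis, here is a deliberately simplistic lexicon-based scorer. The word lists are invented for illustration; a deep-learning model like Ernie 3.5 learns such associations from data rather than from a hand-written list.

```python
# Toy lexicon-based sentiment analysis: count positive vs negative words.
# The word lists below are made up for this illustration.

POSITIVE = {"good", "great", "engaging", "fast", "accurate"}
NEGATIVE = {"bad", "slow", "inaccurate", "offensive", "poor"}

def sentiment(text):
    """Label text by comparing counts of positive and negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The chatbot gave fast and accurate answers"))  # positive
print(sentiment("The responses were slow and inaccurate"))      # negative
```

A word-counting approach like this fails on negation and context (“not great” scores positive), which is precisely the gap that large trained language models are meant to close.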

According to a test by the China Science Daily journal, Ernie 3.5 surpassed ChatGPT in general abilities and outperformed the more advanced GPT-4 on several Chinese-language capabilities.

Ernie 3.5 boosted training efficiency, making future versions faster and cheaper to develop. Baidu hopes that ERNIE Bot will become the next must-have app in China’s internet market, attracting users with its natural and engaging conversations.

Integration

Baidu has been integrating ERNIE Bot across multiple business applications, ranging from cloud computing to smart speakers. 

ERNIE Bot is one example of how Baidu is investing in AI technology and competing with other tech giants in the US and China. Baidu’s founder, Robin Li, reportedly said that ‘foundation models are an engine driving global economic growth and represent a major strategic opportunity that cannot be missed’.

The other big players, Alphabet (Google), Microsoft and Meta, all have their own versions of AI. Hopefully it will be used ‘intelligently’.