ChatGPT, an AI-powered language model, has sparked debate in the cybersecurity industry over concerns that it could be used to generate convincing phishing emails. While experts caution against deploying the technology in high-risk areas, some professionals still worry about its impact on their job security. To probe the question from the defender's side, Kaspersky ran an experiment to measure ChatGPT's ability to detect phishing links and its overall cybersecurity knowledge.

Kaspersky tested the gpt-3.5-turbo model that powers ChatGPT on more than 2,000 links, asking it to judge whether each one was phishing. The results varied with the prompt used. For the question "Does this link lead to a phishing website?", ChatGPT achieved a detection rate of 87.2% with a false positive rate of 23.2%. The alternative question "Is this link safe to visit?" raised the detection rate to 93.8%, but at the cost of a much higher false positive rate of 64.3%.

These false positive rates indicate that the technology is not yet suitable for production use. It does, however, show promise at a related task: extracting the popular brand names that phishing links impersonate to deceive users. And despite its high detection rate, ChatGPT struggled to explain why a given link was malicious, exhibiting the known limitations of language models, including hallucinations, misstatements, and misleading explanations.
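As a rough illustration of how the detection and false positive rates reported above could be computed, here is a minimal Python sketch. The per-link verdicts below are invented placeholders, not Kaspersky's data; in the actual experiment, each verdict would come from asking gpt-3.5-turbo a prompt such as "Does this link lead to a phishing website?" about a labeled link.

```python
def rates(results):
    """Compute (detection_rate, false_positive_rate).

    results: list of (is_actually_phishing, model_flagged_phishing) pairs,
    one per link tested. Ground-truth labels and model verdicts here are
    hypothetical stand-ins for the real experiment's data.
    """
    tp = sum(1 for actual, flagged in results if actual and flagged)
    fn = sum(1 for actual, flagged in results if actual and not flagged)
    fp = sum(1 for actual, flagged in results if not actual and flagged)
    tn = sum(1 for actual, flagged in results if not actual and not flagged)
    detection_rate = tp / (tp + fn)       # share of phishing links caught
    false_positive_rate = fp / (fp + tn)  # share of clean links misflagged
    return detection_rate, false_positive_rate

# Toy sample: 4 phishing links (3 caught) and 4 clean links (1 misflagged).
sample = [(True, True), (True, True), (True, True), (True, False),
          (False, False), (False, False), (False, False), (False, True)]
det, fpr = rates(sample)
print(f"detection={det:.1%}, false positives={fpr:.1%}")
# detection=75.0%, false positives=25.0%
```

The trade-off in the article falls out of this arithmetic directly: a looser prompt like "Is this link safe to visit?" flags more links overall, which raises both the detection rate and the false positive rate.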