
Study Reveals Limitations of LLMs in Password Cracking

Recent research from the **Future Data Minds Research Lab** in Australia has found that large language models (LLMs), such as the one that powers **OpenAI’s** ChatGPT, struggle to generate plausible password guesses. These findings, published in March 2024 on the **arXiv preprint server**, challenge the assumption that LLMs could be used for cybersecurity tasks such as password cracking.

The researchers, led by **Mohammad Abdul Rehman** and **Syed Imad Ali Shah**, explored whether LLMs could create plausible passwords based on user profiles. They focused on the ability of these models to generate passwords that reflect meaningful information, such as names and dates. Their study involved creating synthetic profiles for fictitious users, which included names, birthdays, and hobbies. The team then prompted three different LLMs—**TinyLLaMA**, **Falcon-RW-1B**, and **Flan-T5**—to produce potential passwords for each profile.
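To make the setup concrete, the sketch below shows how a synthetic profile might be turned into a password-guessing prompt. The field names, example values, and prompt wording are illustrative assumptions, not the exact format used in the study.

```python
# Hypothetical sketch of a profile-to-prompt step; the profile fields and
# prompt wording are assumptions for illustration, not the study's own format.
profile = {
    "name": "Emma Smith",
    "birthday": "1990-07-14",
    "hobby": "tennis",
}

prompt = (
    "Suggest ten likely passwords for a user with this profile:\n"
    f"Name: {profile['name']}\n"
    f"Birthday: {profile['birthday']}\n"
    f"Hobby: {profile['hobby']}\n"
    "Passwords:"
)

# Each model (TinyLLaMA, Falcon-RW-1B, Flan-T5) would receive a prompt along
# these lines, and the returned candidates would be collected for scoring.
```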

LLMs Underperform in Password Generation

To evaluate the models’ effectiveness, the researchers employed standard metrics from information retrieval, specifically **Hit@1**, **Hit@5**, and **Hit@10**, which measure how often the correct password appears within a model’s top 1, 5, or 10 ranked guesses. The results were disappointing: every model scored below **1.5% at Hit@10**, indicating a significant shortfall in their ability to generate plausible passwords. In contrast, traditional password-cracking methods, including rule-based and combinator-based techniques, achieved substantially higher success rates.
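For readers unfamiliar with these metrics, the short sketch below shows how a Hit@k score can be computed for a single profile; the guess list and target password are hypothetical and not drawn from the study’s data.

```python
def hit_at_k(ranked_guesses, true_password, k):
    """Return 1 if the true password appears among the top-k ranked guesses, else 0."""
    return int(true_password in ranked_guesses[:k])

# Hypothetical example: candidate passwords ranked by model confidence for one profile
guesses = ["emma1990", "emma_smith", "smith0714", "Emma!23", "tennis1990"]
print(hit_at_k(guesses, "tennis1990", 1))   # 0 -- not the top-ranked guess
print(hit_at_k(guesses, "tennis1990", 5))   # 1 -- found within the top five

# Averaging these scores over all synthetic profiles gives the reported Hit@k rates.
```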

The researchers noted that the LLMs often failed to produce plausible passwords for the created user profiles. As a result, the performance of these models fell short compared to established computational tools. The study highlighted key limitations in the generative reasoning of LLMs, particularly their inability to recall specific examples encountered during training and apply learned patterns to new contexts.

Insights for Future Cybersecurity Research

Rehman, Shah, and their colleagues concluded that while LLMs exhibit impressive capabilities in natural language tasks, they lack the necessary adaptation and memorization skills for effective password guessing. Their findings suggest that the current generation of LLMs is not suitable for inferring passwords, especially without fine-tuning on datasets containing leaked passwords.

This research lays a foundation for future explorations into the potential password generation capabilities of other LLMs. The authors emphasize that their study provides critical insights into the limitations of LLMs in adversarial contexts. They hope that these findings will inspire further investigation into how LLMs can be refined to enhance cybersecurity measures.

As cybersecurity threats continue to evolve, understanding the limitations of tools like LLMs is crucial for developing more robust methods to secure online accounts. By addressing these gaps, researchers aim to prevent malicious actors from successfully guessing passwords and accessing sensitive information.

