In a world where information is available at our fingertips, consider the potential for misinformation: false or misleading information. Misinformation is spreading across industries, fueled by widespread biases and questionable practices in publishing scientific material. Be it print media or a news show on your TV, misinformation is everywhere. Now a study shows that AI-generated reports can spread false information too. Priyanka Ranade, a Ph.D. student in Computer Science and Electrical Engineering at the University of Maryland, Baltimore County, worked with professors Anupam Joshi and Tim Finin to explore a new pathway for misinformation in the scientific community. They discovered that AI systems can generate fake information persuasive enough to dupe specialists in crucial sectors like medicine and defense.
The Extent of the Spread of Misinformation
Misinformation intended to spoil the reputation of companies or public personalities is common. But misinformation within expert communities could lead to frightening consequences, such as doctors and patients acting on wrong medical advice, which could put people's lives in danger. It's totally unacceptable!
To test this issue, the team investigated the effects of spreading disinformation in the cybersecurity and medical communities. Using AI models called transformers, they generated false cybersecurity news and fake COVID-19 medical research, then presented the cybersecurity disinformation to professionals for review. The team was able to deceive the cybersecurity specialists.
Let’s find out how these AI models are becoming a source of misinformation:
The Use of Transformers
Although AI aids in the detection of false information, in recent years it has also been used to spread false information through transformers. Transformers such as Google's BERT use natural language processing (NLP) to read text and generate translations and interpretations. These models have been applied to tasks like storytelling and question answering, as well as testing the limits of machines at generating human-readable text. Transformers themselves are not bad, but they have been used to generate false information in various industries.
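To make this concrete, here is a minimal sketch of how a model like BERT "reads" text, using the open-source Hugging Face transformers library. The library, the bert-base-uncased model, and the example sentence are illustrative choices, not details from the study.

```python
from transformers import pipeline

# Load a publicly available BERT model for masked-word prediction
# (an illustrative setup; the study does not specify this configuration).
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the hidden word from the surrounding context, which is how
# these models learn to "understand" text and produce fluent language.
predictions = unmasker("The attacker exploited a [MASK] in the web server.")
for p in predictions[:3]:
    print(f"{p['token_str']:>15}  (score: {p['score']:.3f})")
```

Masked-word prediction is just the training exercise; generative transformers like GPT-2 push the same idea further by writing entire passages, as the next section describes.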
Risks and Threats
As per the study, transformers could be a grave source of false information in the medical and cybersecurity fields. To test the limits, the team ran the GPT-2 transformer on online sources describing cybersecurity vulnerabilities and attack information. A cybersecurity vulnerability is a flaw in a computer system, and a cybersecurity attack is an act that takes advantage of that flaw. For instance, if the vulnerability is a weak password, a hacker figuring out the password and breaking into your account would be an attack exploiting it.
The model generated the rest of a threat description after being seeded with a sentence or phrase from an actual cyber threat intelligence sample. The generated description was then submitted to cyberthreat hunters, whose job is to read and sift through cybersecurity data. These experts analyze threat descriptions to spot potential attacks and then improve the security systems in place accordingly.
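The seeding-and-continuation step can be sketched in a few lines with the publicly released GPT-2 model from the Hugging Face transformers library. The library, the base gpt2 checkpoint, and the seed sentence below are illustrative assumptions; the study's exact fine-tuned model, training data, and prompts are not reproduced here.

```python
from transformers import pipeline, set_seed

# Load the small, publicly released GPT-2 model (illustrative only; the study's
# exact fine-tuned model and cyber threat intelligence corpus are not used here).
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuation repeatable

# Hypothetical seed phrase written in the style of a threat intelligence report.
seed = "A newly discovered vulnerability in the web server allows remote attackers to"

# The model continues the seed, producing a fluent but entirely unverified
# "threat description" -- the kind of output the study showed could mislead experts.
result = generator(seed, max_new_tokens=60, num_return_sequences=1, do_sample=True)
print(result[0]["generated_text"])
```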
The results were alarming. Cyberthreat hunters, who are well versed in all types of cybersecurity threats and vulnerabilities, were easily convinced by the misinformation. Now consider this: these experts would be making decisions based on information that is false to begin with.
This means it's easy to divert experts' attention to bogus software flaws and keep them from resolving actual vulnerabilities in their systems. In a real-world scenario, a cyber analyst acting on phony information could expose the organization to a serious attack or introduce new vulnerabilities. It's a sheer waste of time and resources, and it could result in a massive data compromise.
Staying Vigilant Is the Key
So, what's the solution? Ideally, experts shouldn't have to rely on information that could be transformer-generated misinformation. Currently, an enormous number of automated systems are being created to extract data from cyberthreat intelligence, which is subsequently used to instruct and train automated systems to spot potential attacks. If these automated systems ingest such fake cybersecurity threats, they will be less effective at detecting actual threats.
Fortunately, researchers in the field of cybersecurity are always looking for new ways to detect misinformation. Understanding how misinformation is generated can make it easier to spot. Software-generated content often contains subtle grammatical errors, which systems can be trained to detect. Systems can also cross-check data against other sources to ensure that statements and statistics are backed by additional evidence. This would save the time, effort, and resources of cyberthreat hunters and analysts and help them make better decisions.
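As a rough illustration of the "train a system to spot it" idea, here is a minimal sketch of a text classifier that learns to separate human-written from machine-generated snippets. The scikit-learn pipeline, the toy training examples, and their labels are all hypothetical; a real detector would be trained on a large, verified corpus and would likely use far stronger features than these.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = machine-generated, 0 = human-written. In practice this
# would be a large corpus of verified human-written threat reports plus samples
# produced by a text generator.
texts = [
    "Patch for CVE-2021-34527 released; apply immediately to affected print servers.",
    "The vulnerability allows attacker to executing code remote on the the system.",
    "Analysts confirmed the phishing campaign targets finance staff via spoofed invoices.",
    "Exploit enable hackers gaining of access full to network the internal.",
]
labels = [0, 1, 0, 1]

# Character n-grams pick up on the awkward phrasing and grammatical slips that
# often mark generated text; a linear classifier then scores new documents.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
detector.fit(texts, labels)

suspect = "Attackers is using the flaw for to steal the credentials of the users."
print(detector.predict_proba([suspect]))  # columns: [P(human), P(generated)]
```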
Finally, even the average consumer should be cautious about the information they consume and be aware that hackers prey on people's trust, particularly when material is not backed by scientific studies or attributed to a credible source.

