3 Strategies for Building Trust in AI-Generated Content for News and Journalism
Artificial intelligence (AI) has become an integral part of our lives, from virtual assistants like Siri and Alexa to self-driving cars and medical diagnosis. One area where AI is gaining particular traction is the news industry, where AI-generated content can help news organizations increase efficiency and scale their output.
However, as the spread of fake news and misinformation has become a major concern for society, building trust in AI-generated content has become a critical challenge for the industry. In this article, we will explore the challenges this new technology faces and how they can be overcome.
Benefits of AI-generated news
AI-generated content can provide several benefits for the news industry, some of which include:
Efficient journalism
One major benefit of AI-generated content for the news industry is its ability to increase efficiency by automating tasks such as writing headlines, summaries, captions, and even articles. This can save time and resources, allowing journalists to focus on more in-depth reporting and analysis.
Scalability
AI-generated content can also help news organizations scale their output to reach a larger audience by creating personalized and localized content, as well as pieces in different languages and formats. This can help them stay competitive in a rapidly changing media landscape.
Assisted research
Finally, AI-generated content can enhance the quality and diversity of news coverage by enabling journalists to explore new angles and perspectives. AI-powered tools can analyze large amounts of data to identify emerging trends and patterns, helping journalists uncover new stories and insights.
Challenges of AI-generated news
Although AI-generated content can provide significant benefits for the news industry, it also poses several challenges in terms of building trust in the authenticity and accuracy of the content. In the context of misinformation and fake news, readers may be skeptical of news that is generated by a machine rather than a human being.
Moreover, AI-generated content can be affected by bias and lack of transparency in the AI's training data and processes. This can lead to inaccurate or misleading content that can undermine the credibility and reliability of the news.
For example, in 2017, an erroneous machine-generated story caused chaos in the stock market. The story, distributed by Dow Jones over various newswires, claimed that Google had bought Apple for $9 billion. The news led to a temporary surge in Apple's stock price, which quickly returned to normal after the error was revealed.
This situation demonstrated the limitations of relying on bots to manage and report on financial events. It also revealed how vulnerable the stock market can be to misinformation and the potential damage that can result from algorithmic errors.
3 strategies for verifying the accuracy of AI-generated news
To build trust in AI-generated content, news organizations can implement several strategies to verify the accuracy and authenticity of the content.
The human component
Employing human editors to fact-check AI-generated content can make a significant difference in how that content is perceived by readers. Human editors can catch errors and inaccuracies and ensure that the content meets the ethical and professional standards of journalism.
There have already been instances where employing a human editor could have helped avoid costly mistakes. For example, CNET had to retract multiple articles generated by an internally designed AI engine covering financial services topics due to mistakes such as incorrect definitions, outdated information, and plagiarized phrases.
Similarly, Men's Journal published an article on the benefits of melatonin that contained significant errors in health content, including false claims that melatonin also contains antihistamines that can cause drowsiness.
These examples highlight the importance of thorough fact-checking and the need for human oversight in the generation of AI-generated content to ensure its accuracy and credibility.
Digital locks
To address concerns about the authenticity and ownership of AI-generated content, technical tools such as digital watermarking can help detect and verify the source and origin of the content.
Watermarking techniques can also protect the rights of authors of natural-language texts by embedding unique codes that identify the original source of a document. This can prevent unauthorized copying and distribution, ensuring that the author's intellectual property rights are protected.
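To make the idea concrete, here is a minimal sketch of one simple watermarking approach: encoding a source identifier as invisible zero-width Unicode characters appended to the text. Production systems (such as statistical watermarking of language-model output) are far more robust; the function names and the `newsroom-ai-42` identifier below are purely illustrative.

```python
# Illustrative sketch: embed an invisible provenance marker in text using
# zero-width Unicode characters. This is a toy example, not a production
# watermarking scheme (it is trivially stripped by re-typing the text).

ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}  # zero-width space / non-joiner
REVERSE = {v: k for k, v in ZERO_WIDTH.items()}

def embed_watermark(text: str, source_id: str) -> str:
    """Append source_id as invisible bits at the end of the text."""
    bits = "".join(f"{ord(c):08b}" for c in source_id)
    return text + "".join(ZERO_WIDTH[b] for b in bits)

def extract_watermark(text: str) -> str:
    """Recover the embedded source_id, if any zero-width bits are present."""
    bits = "".join(REVERSE[c] for c in text if c in REVERSE)
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
    return "".join(chars)

article = embed_watermark("Markets rallied on Tuesday.", "newsroom-ai-42")
print(extract_watermark(article))  # -> newsroom-ai-42
```

The visible text is unchanged to a human reader, but a verification tool can recover the identifier and confirm where the content originated.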
New best practices
News organizations can develop best practices and guidelines for using AI-generated news in a responsible and ethical way. This can include:
Transparency in the AI's training data and processes
News organizations should be transparent about the use of AI-generated content, including how it was created and what sources were used.
Ensuring diversity and inclusivity in the content
Ensure the AI is trained on a wide range of sources and perspectives, and actively seek out underrepresented voices and perspectives to include in the content.
Providing clear attribution and citations for the sources used in the content
This can help ensure transparency and accountability, while also improving the credibility and accuracy of the news.
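One practical way to apply these guidelines is to attach structured disclosure metadata to each AI-assisted piece and enforce it before publication. The sketch below is a hypothetical example; the field names and the simple publish gate are illustrative, not an industry standard.

```python
# Hypothetical disclosure metadata a newsroom might attach to an
# AI-assisted article. All field names are illustrative.
disclosure = {
    "generated_by": "newsroom-ai-42",         # hypothetical model identifier
    "human_editor": "J. Doe",                 # editor who fact-checked the piece
    "sources": [
        "https://example.com/press-release",
        "https://example.com/market-data",
    ],
    "ai_contribution": "draft and headline",  # what the model actually produced
    "reviewed": True,                         # set by the human editor
}

def ready_to_publish(meta: dict) -> bool:
    """A simple gate: no publication without human review, an editor of
    record, and at least one cited source."""
    return (
        bool(meta.get("reviewed"))
        and "human_editor" in meta
        and bool(meta.get("sources"))
    )

print(ready_to_publish(disclosure))  # -> True
```

A gate like this turns the transparency and attribution guidelines above from aspirations into a check that every article must pass.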
Future outlook
As AI-generated news becomes more prevalent, both opportunities and challenges arise for the industry. On the one hand, AI-generated news has the potential to increase efficiency and speed in reporting, allowing journalists to focus on more in-depth reporting and analysis. It can also help diversify the perspectives and voices represented in news coverage by providing a platform for niche topics and underrepresented communities.
However, the use of AI-generated news also raises concerns about the quality and accuracy of reporting. The algorithms used to generate news content may not be able to fully capture the nuances and context of certain stories, potentially leading to misleading or biased reporting. There are also concerns about the impact of AI on employment in the industry, as it could potentially displace human journalists and contribute to job losses.
As a result, it is important for the industry to approach the use of this technology with caution and carefully consider its impact on ethics, quality, and employment. This includes investing in research and development to improve the accuracy and quality of AI-generated news, as well as providing training and support to journalists to help them adapt to this new tool. Additionally, regulatory bodies and industry organizations can establish standards and guidelines for the use of AI-generated content, ensuring that it is used in a responsible and ethical manner.
For instance, governments could establish guidelines such as requiring transparency in the development and implementation of AI technology. These guidelines could also include regulations on the use of AI for specific purposes, such as political advertising, to ensure that it does not spread false information or manipulate public opinion.
Another key regulation could be the establishment of ethical standards that would outline the responsibilities of news organizations in ensuring that the reporting is accurate, unbiased, and inclusive. They could also require news organizations to provide clear attribution and citations for the sources used in the content, as well as to disclose any conflicts of interest related to the use of AI technology.
Challenges and Strategies for Responsible Use
AI has the potential to revolutionize the news industry by increasing efficiency, scalability, and personalization, but it also poses challenges in terms of building trust, accuracy, and transparency. To overcome these challenges, news organizations can adopt strategies for verifying the accuracy of AI-generated pieces, such as human editing, technical tools, and ethical guidelines.
As the world continues to evolve, AI-generated news will continue to play an increasingly important role in the media landscape. It is up to us, as individuals and as a society, to ensure that it is used for the greater good and that it reflects our values, beliefs, and aspirations.
In the end, the question is not whether we can trust artificial intelligence, but whether we can trust ourselves to use it wisely and for the benefit of all. So, let us embrace the power of AI and use it to build a better, more informed, and more connected world.
Contact us if you want to learn more about how generative AI can help your brand push the boundaries of its storytelling.