ChatGPT vs Google Bard: which one is the more responsible AI?
Updated: Mar 15
ChatGPT is taking the world by storm. Within two months of its release, this Artificial Intelligence (AI)-based app recorded 100 million active users, making ChatGPT the fastest-growing app ever launched.
ChatGPT is just one of many AI tools that have emerged recently; others include Google Bard (AI chatbot), Stable Diffusion (AI art), Midjourney, and more. Their sophisticated capabilities attract users, but users are also wary of these tools' potential to disrupt a number of industries. One issue, however, has received little discussion: is the data behind these AI tools handled responsibly enough to be trusted? Put simply, these tools expose every Internet user to privacy threats fed by their own personal data.
Starting with two famous AI chatbots, ChatGPT and Google Bard, this article examines them from the angle of responsible AI.
ChatGPT and its system of 300 billion words: whose data, and for whom?
ChatGPT is based on a large language model that requires huge amounts of data to operate and improve itself. The more data the model has been trained on, the better it becomes at identifying patterns, predicting what comes next, and generating comprehensible language.
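To make the idea of pattern-based next-word prediction concrete, here is a toy sketch: a simple bigram counter, nothing like ChatGPT's actual architecture, but it shows how a model's "predictions" come entirely from whatever text it is fed.

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent follower of `word` seen during training."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else ""

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat" (seen twice after "the")
```

The point of the toy: the model can only reproduce statistics of its training corpus, which is exactly why the provenance and consent behind that corpus matter.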
OpenAI, the company behind ChatGPT, fed the program roughly 300 billion words systematically gathered from the Internet: blogs, books, articles, websites, and other materials containing personal data obtained without users' consent.
If you've ever published a blog entry, a product review, or a remark on an online article, it's possible that ChatGPT has used the data you generated.
Why is that a problem?
First, no Internet user was asked by OpenAI for permission to use their data. This is a clear violation of privacy, especially when sensitive data can be used to identify users, their family members, or their location. Even for publicly available data, OpenAI's usage may still breach what is known as contextual integrity: in privacy law, it is vital that personal information not be revealed outside of the context in which it was originally produced.
Second, OpenAI does not provide any process that allows individuals to check whether the company has stored their personal information or to request its deletion. This "right to be forgotten" is especially important in the case of inaccurate or misleading information, and it is guaranteed under the European General Data Protection Regulation (GDPR). There are even ongoing debates about whether ChatGPT complies with GDPR requirements at all.
Third, the data OpenAI used to train ChatGPT may be proprietary or copyrighted: novels, movie scripts, poetry, research papers, and so on. ChatGPT does not take copyright protection into account when generating output, so anyone using the application's results may unintentionally plagiarize.
Finally, OpenAI did not pay for the Internet data it collected. This is particularly noteworthy in light of OpenAI's recent launch of ChatGPT Plus, a premium subscription that gives users priority access to new features and uninterrupted access to ChatGPT. The strategy is expected to generate $1 billion in revenue for OpenAI by 2024.
Another privacy risk comes from the data users themselves feed into ChatGPT as prompts. When a person asks the tool to answer questions or perform certain tasks, they may inadvertently hand over sensitive information and put it in the public domain.
For example, a lawyer could ask ChatGPT to review a draft divorce agreement, or a programmer could ask it to examine a piece of code. The agreement and the code, along with the resulting output, become part of ChatGPT's database, which means they can be used to further train the tool and may appear in responses to other people's requests.
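One practical mitigation is to scrub obvious identifiers from a prompt before it ever leaves your machine. The sketch below is illustrative only: the `redact` helper and its patterns are assumptions, not part of any official chatbot SDK, and simple regexes catch only the most obvious identifiers.

```python
import re

# Hypothetical patterns for common personal identifiers (illustrative,
# far from exhaustive - real PII scrubbing needs dedicated tooling).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a [TYPE] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact John at john.doe@example.com or +1 555-123-4567."))
# prints: Contact John at [EMAIL] or [PHONE].
```

Run before any API call, this keeps the provider from ever seeing the raw identifiers - though, as the divorce-agreement example shows, truly sensitive content is often context, not just patterns.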
Google Bard - leading to more responsible AI?
Similar to OpenAI's ChatGPT, Google Bard (dubbed "the storyteller") is an experimental AI-powered chatbot that can respond to various inquiries and requests in a conversational manner.
Although not yet widely available, Google Bard is expected to be integrated into Google Search and made accessible through its search bar. It was initially released with a lightweight version of Google's LaMDA model, which the company says uses less computing power, allowing it to scale to more users. To ensure that Bard's responses meet high standards of information quality, safety, and groundedness, Google plans to combine external feedback with its internal testing.
Google has published a set of principles the company follows to ensure responsible AI development, including fairness, privacy, and safety. The company has also implemented measures such as developing bias-detection tools, providing transparency into the algorithm's decision-making, and ensuring that the model is robust enough to perform well across different datasets. Based on Google's responsible AI principles, Bard can be seen as a step toward more responsible AI, reflected in four aspects:
Fairness: Google Bard was developed using a diverse range of data sources to reduce the risk of bias, and the model was evaluated for fairness across different demographic groups to ensure that its output is consistent between them.
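What does "consistent output across groups" mean in practice? One generic metric, sketched below, is the demographic-parity gap: the spread in positive-outcome rates between groups. This is a textbook fairness measure for illustration, not Google's actual evaluation methodology, which has not been published in this form.

```python
def positive_rate(outcomes: list) -> float:
    """Fraction of positive (1) outcomes in a list of 0/1 results."""
    return sum(outcomes) / len(outcomes)

def parity_gap(results_by_group: dict) -> float:
    """Largest difference in positive-outcome rate between any two groups.
    A gap near 0 suggests the model treats the groups similarly."""
    rates = [positive_rate(v) for v in results_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical evaluation results: 1 = favorable output, 0 = unfavorable.
gap = parity_gap({"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 1]})
print(gap)  # prints 0.25 (0.75 vs 0.50)
```

Real fairness audits use many such metrics together, since a single number can mask group-specific failure modes.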
Privacy: Google Bard has strict privacy measures in place to protect sensitive medical data. The data used to train the model was de-identified to prevent any patient identification, and the model adheres to strict data security and privacy protocols.
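To illustrate what de-identification means in general, here is a minimal sketch: drop direct identifiers and pseudonymize record IDs with a salted one-way hash. This is a hypothetical example over plain dicts, not Google's pipeline; real medical de-identification follows standards such as HIPAA Safe Harbor and is far more thorough.

```python
import hashlib

# Fields assumed (for illustration) to directly identify a person.
DIRECT_IDENTIFIERS = {"name", "address", "phone"}

def deidentify(record: dict, salt: str = "s3cret") -> dict:
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # drop direct identifiers entirely
        if key == "patient_id":
            # pseudonymize: replace the ID with a salted one-way hash
            out[key] = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

record = {"patient_id": 42, "name": "Jane Doe", "age": 57, "diagnosis": "flu"}
clean = deidentify(record)  # name gone, patient_id replaced by a hash
```

Note that even this is not sufficient on its own: quasi-identifiers such as age plus diagnosis can still re-identify people in small populations, which is why formal frameworks exist.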
Safety: Google Bard was designed with a strong emphasis on safety, as it is intended to assist in the discovery of new drugs. The model was trained on a vast amount of data to ensure its accuracy and reliability, and it is regularly evaluated to confirm that it performs as intended and raises no safety issues.
Transparency: Google has been transparent about Bard's development, publishing papers detailing the model's design, validation, and evaluation, and providing an open-source platform that allows other researchers to test and validate the model.
Overall, these measures show that Google Bard has been developed with responsible AI principles in mind. Nonetheless, there are undeniable gaps in Bard's information veracity. Notably, Alphabet Inc (GOOGL.O) lost $100 billion in market value after Bard shared incorrect information in a promotional video and a company event failed to dazzle, feeding worries that the Google parent is losing ground to rival Microsoft Corp. Launching after ChatGPT had already come to dominate the AI chatbot market has also cost Google Bard a significant position in the industry. Given that Google had researched this technology for a long time under the name Google Questions and Answering System, why does Bard still run into problems like these? Or is it that pursuing the principles of responsible AI has made Google slower than OpenAI?
Pixta AI's view of responsible AI
With the belief that a better future will be created by better and more responsible AI development, PIXTA AI is proud to empower AI development at many industry leaders in automotive, manufacturing, retail, banking, research institutes, and beyond. We bring added value and strong partnerships to our clients through a full commitment to the highest quality assurance. If you want to learn more about responsible AI or contribute to it, don't hesitate to contact us today!