Since its debut on November 30th, ChatGPT has become the internet’s new favorite toy. The AI-driven natural language processing tool quickly amassed over 1 million users, who have used the web-based chatbot for everything from generating wedding speeches and hip-hop lyrics to crafting academic essays and writing computer code.

However, the consequences for humanity of its widespread use are still unclear and need further study. The system represents a major evolutionary leap: it generates coherent content and can sound remarkably human.

While it still has bugs and glitches, the program has already shown strong potential. And if the system is already seen as a success today, imagine what it can do in the long term. But as the technology advances, so do fears about ChatGPT. After all, what exactly is ChatGPT? Can it really be dangerous? In what sense? Find out below.

What is the ChatGPT program?

Credit: OpenAI/Disclosure

ChatGPT is a virtual robot (chatbot) that answers all sorts of questions, handles written tasks, chats and even gives advice. It can teach you how to prepare a risotto, give tips for landing a job, help with academic work, and more.

Although the program works in almost 100 languages, the model’s performance varies by language; in short, it works best in English. The system was created by OpenAI, a company that counts Elon Musk among its co-founders. Less than five days after its launch, ChatGPT passed 1 million users. And with each interaction, the system gathers feedback that helps it improve further.

Initially, use of the program is free and open to everyone during the testing and research phase. However, OpenAI warns that during this period the software “may occasionally generate incorrect or misleading information”, and that its knowledge of the world is limited to 2021.

Is ChatGPT a breakthrough for artificial intelligence?

Credit: Disclosure/Canva

Text-based artificial intelligence programs work by ingesting large amounts of data (in this case, with an emphasis on words and conversations) and using algorithms to estimate the most likely continuation of a sentence. They are called large language models (LLMs).
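To give a rough sense of that “estimate the most likely continuation” step, here is a minimal, illustrative sketch in Python. It is not OpenAI’s code, and the candidate words and scores are invented for demonstration; a real model would compute such scores over tens of thousands of tokens using billions of learned parameters.

```python
import math

# Hypothetical raw scores a model might assign to candidate next words
# after the prompt "I baked a chocolate ..." (values invented for illustration).
candidate_scores = {
    "cake": 2.1,
    "bicycle": -0.5,
    "theorem": -1.3,
}

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(candidate_scores)
next_word = max(probs, key=probs.get)

print(probs)      # roughly {'cake': 0.90, 'bicycle': 0.07, 'theorem': 0.03}
print(next_word)  # 'cake' -- the most probable continuation is chosen
```

Repeating this word-by-word choice over and over is, in very simplified terms, how such a model produces whole paragraphs.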

Earlier programs could already infer the context in which words are used, allowing better-connected texts, but they were not as capable and tended to give stilted, artificial answers. ChatGPT has learned to hold a conversation, and its output now reads much more like human speech.

What sets the program apart is the technique used to refine how it uses language: reinforcement learning from human feedback (RLHF). Engineers use a reward-and-penalty scheme to teach the model which kinds of responses are most desirable. In addition, the program can admit mistakes, challenge incorrect assumptions, and reject inappropriate requests.
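Again as a rough, hypothetical sketch rather than OpenAI’s actual implementation: the core idea is that human rankings of candidate answers are turned into numeric rewards, which are then used to nudge the model toward the preferred behavior.

```python
# Toy illustration of the reward idea behind RLHF.
# All answers, rankings and numbers are invented for demonstration.

candidate_answers = [
    "I'm not sure, but here is what I do know...",   # honest and helpful
    "The answer is definitely 42, trust me.",        # confident but wrong
    "I can't help with that harmful request.",       # appropriate refusal
]

# Human annotators rank the answers from best (1) to worst (3).
human_ranking = {0: 1, 2: 2, 1: 3}

def ranking_to_reward(rank, total):
    """Higher reward for better-ranked answers (simple linear scheme)."""
    return (total - rank) / (total - 1)  # rank 1 -> 1.0, rank 3 -> 0.0

rewards = {i: ranking_to_reward(r, len(candidate_answers))
           for i, r in human_ranking.items()}

print(rewards)  # {0: 1.0, 2: 0.5, 1: 0.0}
# A real system would use rewards like these to update the model's
# parameters so that preferred kinds of answers become more likely.
```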

Is ChatGPT a threat to learning and creativity?

It is undeniable that the technology is already affecting work and employment. ChatGPT’s ability to generate code, for instance, raises concerns in yet another sector: programming.

However, one of the areas most alert to ChatGPT’s potential problems is also one of the most affected by new technologies: education. The reason is that students can use the program to get ready-made answers for their assignments.

Beyond the copy-and-paste problem, there is the fear of deeper impacts on human learning, specifically on the cognitive exercise of writing a text with clear, well-organized ideas.

With a machine that can perform this exercise for them, many people may lose interest in doing the task themselves: asking an artificial intelligence to do it is simpler and faster.

New York Department of Education Blocks AI

New York City has blocked students from accessing ChatGPT in schools. It can be unblocked only if a school requests access, the same policy applied to YouTube and Facebook.

Access can, however, be granted to schools that wish to use the system for research, which should help with computing projects within the school.

ChatGPT can be put to positive use. For example, educators can use the artificial intelligence to create examples and questions, and students can explore a subject they want to learn about or seek help improving a piece of work or a project.

However, the same program can also be misused. By blocking ChatGPT, the Department aims to keep students from becoming dependent on, and hostage to, this technology, and above all to prevent improper use of the system.

After all, learning is not just delivering the right answer, but developing the skills to reach it. The same applies to critical thinking: writing a book summary, a dissertation or a critical analysis involves weighing several factors, from the central theme to the context of the time. And that is exactly where ChatGPT falls short.

Lastly, ChatGPT can run into problems with plagiarism. Because the system draws on a wide range of sources from across the internet, a text produced by the AI may include false data or reproduce entire excerpts from other people’s writing.
