Featured 2023: Will ChatGPT Replace Developers?

Mondo Technology Updated on 2024-01-31

Artificial intelligence is buzzing again with the recent release of ChatGPT, a natural language chatbot that people are using to write emails, poems, lyrics, and college essays. Early adopters have even used it to write Python code, and to reverse engineer shellcode and rewrite it in C. ChatGPT gives hope to those eager for practical applications of artificial intelligence, but it also raises the question: will it replace writers and developers, just as robots and computers are replacing some cashiers and assembly line workers, and perhaps, in the future, taxi drivers?

It's hard to say how sophisticated AI text-generation capabilities will become as the technology absorbs more and more writing examples. But I think its usefulness in programming is very limited. If anything, it may end up being just another tool in the developer's toolkit, handling tasks that don't require the critical thinking that software engineers provide.

ChatGPT has impressed a lot of people because it does a great job of simulating human conversation and sounding knowledgeable. Developed by OpenAI, creator of the popular text-to-image AI engine DALL-E, it is powered by a large language model trained on vast amounts of text scraped from the internet, including code repositories. It uses algorithms to analyze that text, while humans fine-tune the system's training so that it answers users' questions in complete sentences that sound as though a human wrote them.

But ChatGPT is also flawed, and the same limitations that hinder its use for writing content also make it unreliable for creating code. Because it's based on data rather than human intelligence, its sentences sound coherent but don't deliver critical, informed responses. It can also repurpose offensive content, such as hate speech. Its answers may sound reasonable yet be flatly inaccurate. For example, when asked which of the two numbers 1,000 and 1,062 is greater, ChatGPT will confidently give a well-reasoned-sounding answer: 1,000 is bigger.
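The contrast with ordinary code is instructive: a program computes an answer rather than predicting plausible-sounding text. A minimal Python sketch, using the numbers from the example above (the function name is illustrative):

```python
# Deterministic comparison: plain code gets this right every time,
# because it evaluates the numbers instead of generating likely words.
def greater(a: int, b: int) -> int:
    """Return the larger of two numbers."""
    return a if a > b else b

print(greater(1000, 1062))  # 1062
```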

OpenAI provides an example of using ChatGPT to help with debugging code. But its responses are generated from code it has previously ingested, with no human-style QA behind them, which means the code it produces can contain bugs and errors. OpenAI admits that ChatGPT "sometimes writes plausible-sounding but incorrect or nonsensical answers." That's why its output shouldn't be dropped directly into any program.
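OpenAI's own example isn't reproduced here, but a hypothetical snippet shows what "plausible but incorrect" looks like in practice: code that reads naturally, compiles, and still needs a human reviewer to catch the bug.

```python
# Hypothetical example of plausible-looking but buggy generated code:
# the intent is to sum a list, but the loop silently drops the last element.
def total(values):
    result = 0
    for i in range(len(values) - 1):  # bug: should be range(len(values))
        result += values[i]
    return result

print(total([1, 2, 3]))  # prints 3, not the expected 6
```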

That lack of reliability has already caused problems for the developer community. Stack Overflow, the Q&A site programmers use to write and troubleshoot code, temporarily banned ChatGPT-generated answers, saying that ChatGPT produces such a large volume of responses that it is impossible to keep up with quality control, which is done by humans: "Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers."

Coding errors aside, because ChatGPT, like all machine learning tools, is only as good as the data (in this case, text) it was trained on, it lacks the human understanding of computing context that good programming requires. Software engineers need to understand the intended purpose of the software they are creating and the people who will use it. Good software can't be built by stitching together regurgitated code.

For example, ChatGPT can't resolve ambiguity in even simple requirements. While it's obvious to a person that a ball that simply bounces up and down stays put, and a ball that bounces and then bounces again farther along travels farther, ChatGPT struggles with this kind of nuance, and handling such nuance is exactly what's required if these systems are to take over from developers.
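Resolving that ambiguity is a developer's job: "travels farther" has to be pinned down to a precise definition before it can be coded. A small sketch of one such disambiguation (the net-displacement reading and all names are assumptions for illustration):

```python
# One explicit reading of "travels farther": net horizontal displacement,
# as opposed to total path length. A developer must choose; a chatbot guesses.
def net_horizontal_distance(x_positions: list[float]) -> float:
    """Distance between the first and last horizontal positions."""
    return abs(x_positions[-1] - x_positions[0])

ball_a = [0.0, 0.0, 0.0]  # bounces straight up and down: x never changes
ball_b = [0.0, 2.0, 4.0]  # bounces, then bounces again farther along
print(net_horizontal_distance(ball_b) > net_horizontal_distance(ball_a))  # True
```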

It also has trouble with basic math, such as when asked which of a negative and a positive number is greater. ChatGPT can confidently report the correct sum of the two, yet fail to grasp that -5 is less than 4. Imagine your thermostat running out of control because the heating kicks in at 40 degrees Celsius instead of -5 degrees Celsius, all because an AI program coded the comparison that way!
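Getting a signed comparison right is, of course, trivial in real code. A minimal sketch of the thermostat guard described above (the thresholds come from the example; the names are illustrative):

```python
# Illustrative thermostat guard: a signed comparison done correctly.
HEAT_ON_BELOW_C = -5.0  # heating turns on below -5 °C, not at 40 °C

def should_heat(current_temp_c: float) -> bool:
    """Heating runs only when the temperature drops below the threshold."""
    return current_temp_c < HEAT_ON_BELOW_C

print(should_heat(4))    # False: 4 °C is above -5 °C
print(should_heat(-10))  # True
```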

AI code generation from pre-trained models also raises a number of legal questions regarding intellectual property rights; at the moment, such systems cannot distinguish between restrictively and openly licensed code. If an AI reproduces lines lifted from copyrighted repositories, it could put users at risk of license noncompliance. The issue has already sparked a class action lawsuit over GitHub Copilot, another product built on OpenAI technology.

We need humans to create the software that people rely on, but that's not to say that AI doesn't have a place in software development. In the same way that security operations centers use automation for scanning, monitoring, and basic incident response, AI can serve as a programming tool for handling lower-level tasks.

To some extent, this is already happening. GitHub Copilot lets developers use AI to improve code, add tests, and find bugs. Amazon offers CodeWhisperer, a machine learning-driven tool designed to boost developer productivity by generating code suggestions from natural language comments inside the integrated development environment. Someone has even created a Visual Studio extension for working with ChatGPT.
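The workflow these tools share is comment-driven: the developer writes a natural-language prompt, the tool suggests a completion, and the developer reviews it. A hedged illustration of that loop (the prompt, function, and log format here are all hypothetical, not output from any real tool):

```python
# Prompt-style comment of the kind Copilot or CodeWhisperer completes:
# "Parse a log line like '2023-01-31 ERROR disk full' into (date, level, message)."
def parse_log_line(line: str) -> tuple[str, str, str]:
    date, level, message = line.split(" ", 2)
    return date, level, message

# The review work stays human: this suggestion crashes on malformed
# lines with fewer than three fields, which a developer must catch.
print(parse_log_line("2023-01-31 ERROR disk full"))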

Another company is already testing AI code generation for developers. DeepMind, which shares a parent company with Google, released its own code generation tool, AlphaCode, earlier this year. Earlier this month, DeepMind published the results of simulated evaluations on the CodeForces competition platform in the journal Science, in a paper titled "Competition-Level Code Generation with AlphaCode." AlphaCode ranked in the top 54% of participants by solving "problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding." "The development of such coding platforms could have a huge impact on programmers' productivity," the accompanying summary says. "It may even change the culture of programming by shifting human work to formulating problems, with machine learning responsible for generating and executing code."

Machine learning systems are getting more advanced every day; however, they cannot think like the human brain, and that has held true across more than 40 years of AI research. While these systems can recognize patterns and boost productivity on simple tasks, they can't yet produce code as reliably as humans. Before we let computers generate code at scale, we should probably wait until systems like AlphaCode outperform at least 75% of participants on platforms like CodeForces, though I suspect that may be too much to ask of such systems. In the meantime, machine learning can help solve the simpler programming problems, freeing tomorrow's developers to think about more complex ones.

For now, ChatGPT isn't disrupting any field of technology, least of all software engineering. Fears that robots will replace programmers are overblown. There will always be tasks that developers can accomplish with human cognition and that machines never will.
