Year-End Summary: Development Tools and Assistants for Large Language Models

Mondo Workplace · Updated 2024-01-31

This has been a big year for large language models. For developers, the impact of LLMs has been felt mainly in AI development tools and assistants.

Translated from "Large Language Models in 2023: Tools and Assistants for Devs" by David Eastman. Eastman is a London-based software professional who formerly worked at Oracle Corp. and British Telecom, and as a consultant helping teams work more agilely. He wrote a book on UI design and has been writing technical articles ever since.

This year has been a breakthrough year for large language models (LLMs), but developers are still in the early stages of using their full power.

I had mixed feelings when I was told that a potential interview candidate used ChatGPT while answering web-based questions. It's a bit like how teachers felt when students first got access to handheld calculators. Nowadays, the use of calculators in math exams is no longer considered a problem; they can be used freely. Ultimately, arithmetic is only a small part of math, and a calculator is definitely just a tool. Using ChatGPT as a substitute for one's own skills and knowledge, however, may seem both unnecessary and hopeless. But this year's evidence suggests that for developers, artificial intelligence is mainly embodied in the form of tools or assistants.

The core of LLMs lies in that first word: large. They can only perform so well because they absorb a lot of data. I tried a similar technique, but without the support of neural networks; the result did not produce any decent poetry, but it did show the process of absorbing a corpus of text.

Image generated with Stable Diffusion; prompt: "A giant robot shoots lasers from its eyes, and everyone is running in terror."

ChatGPT's ability to synthesize information when answering queries is invaluable. (Although for many queries, you may find Perplexity.ai offers a better experience.) Many developers are drawn to the way they can quickly pull a fully explained example into their projects. But the advantage of tools is that they can increase developer efficiency with minimal friction. This year was mostly about developer tools, which increasingly include AI.

Rust has played a role in building faster full-screen tools, with Zed and Warp being strong examples. Zed is a full-featured "multiplayer" editor built for speed; Zed 2 should be released soon, and I've noticed their work on Tailwind completion. Warp continues to innovate on the command line.

Adopting AI within tools has been central to many of these efforts. The first big wave came from Copilot in Visual Studio. I also took a look at Replit's Ghostwriter, as well as CodiumAI's test generator. Cursor offers an AI-led editor. Many of these are essentially thin wrappers that forward requests to an LLM for sample code. However, Copilot can complete a class method from only its signature, if the method name is fairly conventional. I'm more impressed by the ability to have suggestions appear directly inline as I type, rather than typing text in a separate window. At the moment Microsoft seems to have the better direction, but this will change as other projects mature.
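As a hypothetical sketch (not actual Copilot output): given only a conventionally named signature such as `is_palindrome`, an assistant can often infer a plausible body from the name alone:

```python
# Hypothetical example: a canonically named signature that an AI
# assistant could plausibly complete without any further context.
def is_palindrome(text: str) -> bool:
    # The kind of body an assistant typically suggests:
    # normalize to lowercase alphanumerics, then compare with the reverse.
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
```

The point is that the method name itself carries enough convention for the model to pattern-match a whole implementation.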

CodiumAI's test generator really demonstrates that AI can work directly in the development cycle by generating sensible unit tests for existing methods. There are many ancillary tasks around coding where AI can help. This is where we need to address the different needs of dabblers and professional developers, and why current designs don't serve either well. I candidly believe that a full development cycle is still too tricky for dabblers, and AI tools can't change that yet.
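To illustrate the idea (a sketch, not CodiumAI's actual output), suppose the generator is pointed at a hypothetical `apply_discount` method; tests of this shape, covering a happy path, a boundary, and an invalid input, are what such tools typically propose:

```python
# Hypothetical existing method the test generator is pointed at.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Tests of the kind an AI test generator might propose:
def test_typical_discount():
    assert apply_discount(100.0, 25.0) == 75.0

def test_zero_discount_is_identity():
    assert apply_discount(19.99, 0.0) == 19.99

def test_invalid_percent_raises():
    try:
        apply_discount(50.0, 150.0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

The value lies less in any single test than in the generator noticing the boundary and error cases a developer might skip.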

Keep in mind that a missing quote can still cause an entire file to fail to compile. On the other hand, people who code every day are looking for lots of small but constant aids, rather than grabbing big chunks of code from the web. This intelligence can come from large language models, but precise control of the editor interface is crucial. Any suggestion is useless if it interrupts my workflow.
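That fragility is easy to demonstrate. In Python, for example, one missing quote makes the whole file unparseable, not just the offending line (a minimal illustration):

```python
# A source string with one missing closing quote:
broken_source = 'print("hello)\n'

try:
    # compile() parses the source the same way the interpreter would.
    compile(broken_source, "example.py", "exec")
except SyntaxError as err:
    # The compiler rejects the entire file.
    print("compile failed:", type(err).__name__)  # prints "compile failed: SyntaxError"
```

This is why tiny, inline corrections matter more to working developers than large pasted snippets.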

This year we've also seen platforms that help developers use large language models indirectly. I think it's too early to talk about them, as further advances from OpenAI could easily diminish their utility. One example I've touched on is Fixie, which takes its own approach to leveraging large language models. What about writing a whole project with a large language model tool? Is the developer's job at risk? Do we all have to cheat now?

Developers have two skills they use every day to move work forward: making connections and understanding transitions. These are almost unattainable for large language models (LLMs); for now, they remain distinctly human traits.

Understanding a project's transitions requires knowledge of the staff, the organization's financial situation, the business environment, and so on. An LLM might, in theory, suggest moving from a relational database to a Redis key-value store and then to a new system in the cloud, but no one would fully trust that opinion; after all, you can't fire or demote an LLM. Making connections requires daily observation of life, though it's not technically impossible for AI. Xerox PARC's WIMP visual interface was born from the desktop metaphor that occurred to its designers. At the moment, LLMs are mostly reactive; they don't suddenly get inspired in the bathtub.

I have a bit of sympathy for those who think artificial general intelligence (AGI) and disaster are just around the corner, simply because the power of LLMs surprises people. But in reality, there is little evidence for this. Pattern recognition matters greatly to us because it's how we navigate the world. Language understanding in silicon, though, is just a tool for us. As Professor Michael Wooldridge puts it, "From any meaningful point of view, it is not aware of the world."

I'll be writing about the trends we might see in the next year in my next post.
