Amid the rapid development of generative AI, many people are unsure whether they should continue learning to code. On GQ's program, Professor Sarah Chasins discusses the principles behind ChatGPT's underlying LLM and points out the limitations of Vibe Coding.
Recently, GQ Taiwan shared a video on their YouTube channel, inviting UC Berkeley computer science professor Sarah Chasins to respond to many questions from netizens about programming and AI.
Amidst the rapid growth of generative AI, many are unsure whether to keep learning to code. In the video, Professor Chasins not only explains the technical principles but also offers pragmatic observations on the recent trend of "Vibe Coding."
Professor Sarah Chasins first explains in an accessible way how ChatGPT works.
ChatGPT is built on large language models (LLMs). Its core operation is surprisingly simple: it is a program whose job is to string together words that seem to fit.
Developers of LLMs begin by collecting enormous quantities of human-written documents and web pages from the internet; these capture which word combinations humans consider reasonable.
Then the program undergoes large-scale "fill-in-the-blank" training. For example, the system might see a sentence like "The dog has four [blank]," where the answer any human would give is "legs." If the program guesses incorrectly, developers correct it until it gets it right.
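To make the idea concrete, here is a minimal sketch of fill-in-the-blank prediction, using a toy word-counting model as a stand-in for a real LLM; the corpus, names, and functions below are illustrative assumptions, not anything shown in the video.

```python
# A toy stand-in for an LLM: learn which word tends to follow which
# by counting word pairs in a tiny training corpus.
from collections import Counter, defaultdict

corpus = [
    "the dog has four legs",
    "the cat has four legs",
    "the dog has a tail",
]

# "Training": tally, for each word, which words follow it in the corpus.
follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follow_counts[prev][nxt] += 1

def fill_in_the_blank(prev_word: str) -> str:
    """Guess the blank by picking the most common next word seen in training."""
    candidates = follow_counts[prev_word]
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(fill_in_the_blank("four"))  # "The dog has four ___" -> "legs"
```

The `follow_counts` table is this toy's entire record of its training data; at an absurdly smaller scale, it plays the same role as the "cheat sheet" of parameters described next.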
After training that consumes the equivalent of roughly 300 to 400 years of continuous real-world computation, the program ends up with an extremely large "cheat sheet," known in the tech industry as its "parameters."
Next, by handing this fill-in-the-blank expert a document formatted as a dialogue, it can be turned into a chatbot: the program simply keeps filling in the rest of the conversation, completing plausible responses to the human's questions.
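The following sketch shows that framing: the conversation is laid out as a text document, and the "chatbot" is just the model completing it. `generate_next_words` is a hypothetical placeholder for a real model call, not an actual API.

```python
def generate_next_words(document: str) -> str:
    # In a real system, the trained model would predict, word by word,
    # the most plausible continuation of `document`. Here we return a
    # canned answer purely for illustration.
    return "Paris."

# The dialogue is just a document the model is asked to keep completing.
transcript = (
    "User: What is the capital of France?\n"
    "Assistant:"
)

print(transcript + " " + generate_next_words(transcript))
```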
Faced with the powerful capabilities of AI tools, many question whether learning to code is still necessary. Professor Chasins believes the core skill taught in coding education is "problem decomposition": breaking a vague big problem into smaller and smaller parts until each part can be solved with a few lines of code.
Without this training, users will find it difficult to get truly functional, complex programs out of AI tools. Moreover, the training data for LLMs consists mostly of engineering-style descriptions rather than the everyday language non-professionals use, so prompts written in everyday terms match the training data poorly, and the AI struggles to generate useful code from them.
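As an illustration of what problem decomposition looks like in code, here is a minimal sketch that breaks a vague task, "tell me what a text file is about," into pieces each solvable in a few lines. The task, file name, and function names are assumptions made up for this example.

```python
from collections import Counter

def read_text(path: str) -> str:
    """Step 1: load the raw text from disk."""
    with open(path, encoding="utf-8") as f:
        return f.read()

def extract_words(text: str) -> list[str]:
    """Step 2: normalize the text and split it into words."""
    return [w.strip(".,!?;:").lower() for w in text.split()]

def most_common_words(words: list[str], n: int = 5) -> list[tuple[str, int]]:
    """Step 3: count the words and rank the top n."""
    return Counter(words).most_common(n)

def summarize(path: str) -> None:
    """The vague big problem, solved by composing the small parts."""
    print(most_common_words(extract_words(read_text(path))))

# summarize("notes.txt")  # "notes.txt" is a hypothetical input file
```

Someone who can name these steps can also ask an AI assistant for each piece separately and check the results, which is exactly the skill the professor argues the tools do not replace.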
As for how to get the most out of AI-assisted coding, Professor Chasins recommends a three-step approach.
Regarding the recent trend of using LLMs to directly generate code instead of humans typing it, Professor Sarah Chasins remains cautious.
She observes that these tools perform reasonably well on routine tasks that humans have already written countless times, but tend to fail when attempting anything innovative.
The professor also cites research indicating that although users of LLM tools believed their efficiency had increased by about 20%, their actual development speed was roughly 20% slower than that of developers who did not use such tools.
This suggests that over-reliance on the tools can create an illusion of efficiency. When faced with programming requirements the model has never seen, a user who lacks basic logical-decomposition skills and an understanding of the underlying principles has no way to correct the AI's mistakes, and the final result ends up taking even more time.
To give a simple analogy, an LLM is like a high-end self-driving car that handles common routes well. But if you cannot read the road yourself or understand how the vehicle actually works (the programming equivalent of logical decomposition), then when the car meets an unfamiliar, challenging curve, much like an innovative programming requirement, the self-driving system can easily go wrong, and without those fundamentals you will not know how to fix it.
Further reading:
AI enables one-person companies to rise! "Vibe Coding" disrupts tradition, allowing small teams to earn over a hundred million annually