
ZDNET Highlights
- AI Dev, DeepLearning.ai’s AI conference, makes its debut in NYC.
- We sat down with Andrew Ng at the event to talk AI and developers.
- Ng recommends that everyone learn to code.
AI Dev, the second annual summit on all things AI and software organized by Andrew Ng’s DeepLearning.ai, was held in New York on Friday. At several panels and in an interview with ZDNET, the Google Brain founder offered advice about the future of the field.
AI has rapidly become a reliable coding assistant for many developers – so much so that many are wondering about the future of the entire profession. Entry-level coding jobs for recent graduates are declining as teams push junior tasks onto AI assistants; at the same time, experts cite the real limitations of these tools as evidence that they will never actually make engineers obsolete.
Also: Why AI coding tools like Cursor and Replit are doomed – and what comes next
Here’s what Ng had to say about how to deal with this uncertain future, why everyone should learn to code, and how governance should actually be done.
Coding still matters – sort of
“Since AI coding has greatly lowered the barrier to entry, I hope we can encourage everyone to learn to code – not just software engineers,” Ng said during his speech.
How AI will impact jobs and the future of work is still an open question. Regardless, Ng told ZDNET in an interview that he believes everyone should know the basics of using AI to code, which he likens to knowing “a bit of math” – still a hard skill, but one that’s broadly applicable across many careers.
“One of the most important skills of the future is the ability to tell a computer what you want it to do for you,” he said, adding that everyone should know enough to speak the language of computers without needing to write code themselves. “The syntax, the arcane spells we use, are less important.”
Also: OpenAI tested GPT-5, Claude, and Gemini on real-world tasks – the results were surprising
He said he wants to welcome vibe coders as members of the community, even if they aren’t technically developers themselves. But he also doesn’t expect it to be easy. Despite noting that “it’s really obvious that code should be written with AI assistants,” Ng admitted that vibe coding – which he prefers to call “AI coding” – leaves him “mentally exhausted.”
Be a generalist
In his keynote speech, Ng said that because AI has accelerated software development so much, product management – not prototyping – is the new bottleneck for launching new products. To keep up with the pace that AI is making possible, he suggested that engineers learn some product management skills to avoid that bottleneck.
“Engineers who learn some product work can effectively be a one-person team,” he said.
Also: What Bill Gates actually said about AI replacing coding jobs
The call for all professionals – not just developers – to become generalists was echoed throughout the summit. During a panel on development in the AI age, Fabian Hedin, CTO of coding platform Lovable – one of the underdog startups on A16Z’s recent list – said that vibe coding could enable people with deep knowledge of a non-software subject to “iterate much faster than before” using coding skills. Moderator Laurence Moroney, director of AI at Arm, said it could make the most of an otherwise siloed expert, changing how specialized skills function in the workplace.
Ng said during the panel that the new challenge for developers will be conceptualizing what they want. Hedin agreed, saying that if AI is the future of coding, developers should lean on their intuition when creating a product or tool.
“The thing AI will be worst at is understanding humans,” he said.
Why CS degrees are failing students
The realities of coding in the AI age have started to hit recent graduates struggling to find a job. Ng told ZDNET that computer science, once considered a surefire path to a lucrative career, is now disappointing students.
He explained that tech companies overhired during the COVID-19 pandemic and then ultimately reversed course, which is part of why entry-level coding jobs are now difficult to find. Beyond that, however, it’s a question of whether graduates have the right kind of coding skills.
“AI has changed the way we write code, but frankly, many universities have been slow to adapt their curricula,” he said. “So if a university has not made significant changes to its curriculum since 2022, it is not preparing graduates for the jobs on the market today.”
Also: Gartner says AI will cause ‘jobs chaos’ in the next few years – what this means
Ng said he considers it “malpractice” for universities to award CS degrees without teaching students how to work effectively with AI assistants.
He said, “I really feel bad that there are people today who are getting graduate degrees in computer science who have not made a single API call to a single AI model.” For him, reorienting CS degrees around that reality would bridge the gap between under-prepared graduates and the demand for AI-experienced coders. “For new college graduates who know those skills, we can’t find enough of them,” Ng said, a concern he also raised in an X post earlier this fall.
Public fear of AI
In his keynote speech, Ng acknowledged that “AI has not yet won America’s hearts and minds,” referring to the often-circulated public perception of what AI could become at its worst. Several panelists called on the hundreds of developers in the audience to change that perception.
“You have this unique insight into what AI is not,” said Miriam Vogel, president and CEO of EqualAI. She urged developers not to dismiss people’s fears about the technology, but to actively participate in AI literacy, saying that “we will fail” if this sentiment does not improve.
Ng believes that some of the fear of AI has been deliberately manufactured by third parties.
“I think a lot of the fear of AI was driven by a handful of businesses that ran, frankly, almost PR campaigns to make people afraid of AI, often tied to lobbying,” he told ZDNET during our interview. “I think this does a lot of damage to American leadership in the field of AI and to developers.”
When asked how developers can influence this, he said he wants them to have candid conversations about what’s working and what’s not. “If the public understood this better, we could all come to more rational conclusions about technology,” he said.
Many of these fears stem from AGI, the somewhat loosely defined equivalent of human-level intelligence that OpenAI and Microsoft, among other labs, have set their sights on with increasing intensity. Ng has long said those ambitions are overhyped.
“If you look at the incredibly messy training recipes that go into training these AI models, it makes no sense that this is AGI — if, by AGI, you mean any intellectual task that a human does,” Ng told ZDNET. “Clearly, much of that knowledge is still engineered into these systems, by very smart people, with lots of data.”
Safety and governance
In a panel conversation, Ng acknowledged that the public doesn’t really know what AI labs are doing, which can breed panic, but urged people not to “do a red teaming exercise and turn it into a media sensation.” Ng said he is not in favor of Anthropic’s brand of safety and governance, which he finds somewhat limiting. Rather than dismissing governance efforts, he emphasized a “guaranteed safe” sandboxed environment that does not hinder speed as a path toward responsible AI.
Vogel defined governance as “breaking down principles into actionable workflows,” not building bureaucracy. Her concern was less about hyperscalers like OpenAI and Meta, and more about smaller AI companies that rush in before they have developed any governance structures.
Regulating AI
“You can’t lead in AI by passing regulations,” Ng said during a panel on the EU’s approach to legislating on AI. He credited the Trump administration’s AI Action Plan, released last summer, for loosening federal regulations.
Many AI experts are concerned about the lack of US AI regulation. Some view the federal government’s failure to regulate social media platforms as an example of what can happen when technology outpaces the law. Ng told ZDNET that he believes this is a false equivalence.
Also: 8 ways to make responsible AI part of your company’s DNA
“I’m seeing far more bad regulatory proposals than good proposals,” Ng said in an interview. He cited the ban on non-consensual deepfakes and the FTC’s actions against companies using AI for “deceptive or unfair conduct” as examples of good AI policy.
When asked what other regulations he would enact at the federal level, he said he would start with more transparency requirements for large AI companies.
“When a lot of bad things happened on social media, none of us knew about it. Even people inside the business didn’t really know about it,” Ng told ZDNET. “If we have rules applying only to the largest companies, we don’t impose an undue compliance burden on smaller startups. But if we demand some level of transparency from businesses with very large numbers of users, that might give us a better signal to identify real problems, rather than relying on the luck of having a whistleblower.”

