Mozilla is in a tricky position. It contains both a nonprofit organization dedicated to making the internet a better place for everyone, and a for-profit arm dedicated to, you know, making money. In the best of times, these things feed each other: The company makes great products that advance its goals for the web, and the nonprofit gets to both advocate for a better web and show people what it looks like. But these are not the best of times. Mozilla has spent the last couple of years implementing layoffs and restructuring, attempting to explain how it can fight for privacy and openness when Google pays most of its bills, while trying to find its place in an increasingly frothy AI landscape.
Fun times to be the new Mozilla CEO, right? But when I put all that to Anthony Enzor-DeMeo, the company’s just-announced chief executive, he swears he sees opportunity in all the upheaval. “I think what’s actually needed now is a technology company that people can trust,” Enzor-DeMeo says. “What I’ve seen with AI is an erosion of trust.”
Mozilla is not going to train its own giant LLM anytime soon. But there’s still an AI Mode coming to Firefox next year, which Enzor-DeMeo says will offer users their choice of model and product, all in a browser they can understand and from a company they can trust. “We’re not incentivized to push one model or the other,” he says. “So we’re going to try to go to market with multiple models.”
-_-



Only ever dealt with dog-shit programmers, huh?
An elitist response.
Here's actual data: https://survey.stackoverflow.co/2025/ai
Half of developers use AI tools daily.
Not elitist to say that people who use what are essentially weighted random word generators for programming, a career that requires one to know exactly how their code works in case it breaks, are bad at their jobs. Just like how it’s not elitist to say that generated images are not art, and that flying a plane into a building doesn’t make you a good pilot.
Using AI doesn't mean you lose the ability to reason, debug, or test generated code. All merged code should be peer-reviewed and tested.
We’re not discussing images, nor planes.
The claim was that tech-savvy people are the same people who are most opposed to AI. There's data to suggest otherwise: people who are technically inclined engage with AI more and have a more positive reception of it compared to less experienced users.
Unless you have additional data to support that they are in fact "dog-shit programmers," this appears to be an emotional claim colored by your own personal bias. Though if you're a "pure" programmer who is better than the dog-shit developers, I would love to see some of your work or writings.
Oh I get it now- you ARE one of those dog-shit AI vibe programmers.
If that makes you feel better. But I wish you had responded to the original claim with data rather than ad hominem. If you're so good, can I view your GitHub to learn how you program?
Lol, no- because I’m not a programmer. I do server admin work- so I closely work with our programmers. I work with people that actually can do their job.
And for your data: https://www.media.mit.edu/publications/your-brain-on-chatgpt/
If it degrades brains when used for writing essays, it will do the same for writing code.
It's wild that someone who doesn't program would degrade others who can, given that the majority of developers use AI-based tools.
The study does not prove that claim. Better to link directly to the study: https://arxiv.org/abs/2506.08872
The study does suggest that relying only on ChatGPT could reduce engagement in the specific task of essay writing and harm recall. It does not prove that ChatGPT causes individuals to become less intelligent at programming.
There are many studies that show potential positive outcomes when utilizing LLMs:
[2512.13658] Embedding-Based Rankings of Educational Resources based on Learning Outcome Alignment: Benchmarking, Expert Validation, and Learner Performance https://arxiv.org/abs/2512.13658
[2509.15068] Learning in Context: Personalizing Educational Content with Large Language Models to Enhance Student Learning https://arxiv.org/abs/2509.15068
I degrade people that use a literal random word generator that shits out “code” and then claim they can code. I have respect for programmers. Not for AI programmers.
While AI use doesn't guarantee a loss of cognitive abilities, there is growing research suggesting that it is likely.
https://www.psychologytoday.com/us/blog/the-art-of-critical-thinking/202512/is-generative-ai-rewiring-our-brains-heres-how-it-happens
I posted a response in another reply, but thanks for sharing. It still does not support the claim of dog-shit developers.